Tuesday, 27 August 2013

Decision-as-a-Service: Applying Analytics at the point of Capture (Part 2 of 2)

In my previous article, I introduced the concept of an Automated Decision Management (ADM) platform evolving from the likes of ECM and BPM as we look towards the future. Products within 3 distinct categories (capture, content management and output management) will join forces to create a new software and service category, augmented and enabled by super-advanced semantic technologies.

The question is: how far is this concept from reality?

Automated Decision Management: Barriers to Adoption

Realising an automated decision platform touches many digital systems and technologies. The following key challenges need to be solved if this vision is to become reality.
  • The integration of structured and unstructured data systems, or preferably a content platform to handle both. The worlds of ERP and CRM need to merge with the likes of ECM and BPM
  • The integration of internal data systems with external data sources, particularly Social Media to monitor, understand and react to live consumer sentiment
  • The development of a new form of middleware hosting a number of adjacent technologies
  • The advancement of super-intelligent linguistics – technologies which understand human language and intent
  • The evolution of classic rules engines to dynamic decision engines capable of understanding and reacting to unpredictable events via adaptive business logic (a minimal sketch of this idea follows the list below)
  • The integration of outbound communications, including marketing automation and content composition platforms to inbound and process management platforms, such as capture and BPM
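To make the rules-engine point above a little more concrete, here is a minimal sketch (in Python) of what an adaptive decision engine might look like in contrast to a fixed rule set: rules can be registered, replaced or re-weighted while the system is running. The rule names, event fields and fallback action are all invented for illustration, not drawn from any particular product.

```python
# Minimal sketch of an adaptive decision engine (illustrative only).
# Rules are plain callables that can be added, replaced or re-weighted at
# runtime, so the business logic can adapt as new events and patterns emerge.

class AdaptiveDecisionEngine:
    def __init__(self):
        self.rules = {}  # name -> (condition, action, weight)

    def register(self, name, condition, action, weight=1.0):
        """Add or replace a rule while the engine is running."""
        self.rules[name] = (condition, action, weight)

    def reweight(self, name, weight):
        """Adjust a rule's importance, e.g. in response to observed outcomes."""
        condition, action, _ = self.rules[name]
        self.rules[name] = (condition, action, weight)

    def decide(self, event):
        """Return the action of the highest-weighted rule that matches the event."""
        matches = [(weight, action)
                   for condition, action, weight in self.rules.values()
                   if condition(event)]
        if not matches:
            return "route-to-human"  # fall back when no rule applies
        return max(matches)[1]

# Hypothetical usage with invented event fields:
engine = AdaptiveDecisionEngine()
engine.register("churn-risk",
                lambda e: e.get("sentiment") == "negative" and e.get("tenure_years", 0) > 3,
                "offer-retention-discount", weight=2.0)
engine.register("spam-complaint",
                lambda e: "spam" in e.get("text", "").lower(),
                "open-compliance-case")

print(engine.decide({"text": "Please stop these spam texts", "sentiment": "negative"}))
```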

Conclusions & Outlook
  • Enterprise Content Management platforms will evolve from managing content, to managing processes, to managing decisions – streamlining interactions, shortening response times and identifying threats and opportunities faster than ever
  • A new form of intelligent system will begin to emerge – Automated Decision Management platforms will merge capabilities across a multitude of content-driven technologies to handle end-to-end human tasks
  • Intelligent systems will adapt to events on-the-fly – decisions will be made in real-time based on a combination of historical behavioural data and contextual data “in-motion”
  • Intelligent systems will also boast human-like senses, gaining the likes of machine vision and thus being able to “see the world for themselves” – opening up a whole new space for intelligent monitoring and surveillance systems – tracking consumer behaviour, supporting counter-terrorism, automating traffic management and so on
  • The Internet will continue to grow its knowledge base with businesses and consumers alike increasingly able to tap into this knowledge pool on-demand – offered as “services” via a subscription model

Organisations looking to get ahead of the game need to start thinking and planning now in order to gain long-term competitive advantage.

The following recommendations are made:
  • Understand the vision of the future, but don’t rush to get there overnight – systems need not be fully automated from day one
  • Develop the architecture to allow for expansion – be pragmatic, start small and look to add more advanced capabilities over time
  • Recognise that the business case is not primarily about productivity or cost cutting, but about driving superior customer experience, which in turn drives increased revenue
  • Focus on optimising select business processes, each process justifying its own mini business case
  • Understand that a technology platform is not the single answer to antiquated business processes in need of a revamp; it is only part of the solution
  • Work with third-party experts to re-engineer business processes and transform operating models prior to technology selection and deployment    

Tuesday, 20 August 2013

Decision-as-a-Service: Applying Analytics at the point of Capture (part 1 of 2)


Many organisations have recognised the need to apply some kind of document and data capture technology on the journey to the digital enterprise. After all, you cannot realise the vision of the paperless office if you are still pushing paper around.

Leading organisations are now embarking on the next phase of their digital journey. For those that have laid the initial foundations, the transition to the digital enterprise is less about paper and more about data.

“Now that we are capturing and routing all inbound documents electronically – how do we make use of the actual content?”

From Digital Mailroom, to Customer Dialogue Management

The description of capturing and routing all inbound documents is of course the digital mailroom concept. Such systems point to proven ROI – speeding downstream business processes through automated classification and routing, thereby increasing the overall effectiveness of an organisation. Things get done quicker and smarter, with less effort, less paper and less cost. But what comes next?  

Looking at the bigger picture, all processes, communications, interactions and decisions, be that in a consumer or business capacity, have one thing in common – they share 3 core components – an input element, a processing element and an output element.  
  • Input – the data flowing into the central system from outside
  • Process – the action of manipulating the data into a more useful form
  • Output – presenting the data as information in a user-friendly way
In other words, the information flow moves from data, to insight, to action. 
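As a deliberately simplified illustration of that data-to-insight-to-action flow, the sketch below (Python) chains the three elements as plain functions. The field names and classification logic are hypothetical placeholders for real capture, analytics and response systems.

```python
# Toy illustration of the input -> process -> output chain (data, insight, action).

def capture(raw_document):
    """Input: normalise an inbound item into structured data."""
    return {"channel": raw_document["channel"], "text": raw_document["body"].strip()}

def analyse(data):
    """Process: turn data into insight (here, a crude topic classification)."""
    text = data["text"].lower()
    topic = "complaint" if ("complaint" in text or "stop" in text) else "enquiry"
    return {**data, "topic": topic}

def respond(insight):
    """Output: present the insight as an action in a user-friendly form."""
    if insight["topic"] == "complaint":
        return "Acknowledge within 1 hour and escalate to the retention team"
    return "Send the standard information pack"

document = {"channel": "email", "body": " Please stop sending me marketing texts. "}
print(respond(analyse(capture(document))))
```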

In this sense, the Digital Mailroom, acting as the input element, represents only the start of the value chain and therefore just the beginning of any end-to-end business improvement initiative. Traditionally, the processing element – “what is the value of this content, and therefore what do I have to do with it?”, or more simply, the decision point – has been a job for humans, but will increasingly become a job for machines as more advanced technologies emerge.

Organisations across the globe have realised the benefit of handling incoming content (paper, email, web, PDF) through one multi-channel content capture platform – standardising the capture process. But what about the response process?

Leading organisations are one step ahead and have recognised the strategic importance of the Digital Mailroom and its ability to drive customer-centric activities. The system is set to evolve from a purely admin-centric operation, stationed on the organisation periphery, to one placed at the heart of business operations driving high-value, high-impact customer dialogue.

Decision-as-a-Service: Automated Decision Management  

The concept of a fully automated input-process-output chain of events – touchless processing with minimal or zero human interaction – is human-computer interaction (HCI) at its best and straight out of science fiction. Ask a question, receive an answer – in real-time. Sure, it is a futuristic vision, but an increasingly realistic one when you begin to combine advanced technologies in recognition, analytics and linguistics.

Imagine a state-of-the-art system where incoming content (requests, enquiries, complaints, questions) is analysed with human-like interpretation to drive automated interactions, decisions and actions (answers, offers, recommendations, approvals) in near real-time or even real-time itself.   

Such a concept gets close to a real-time conversation platform between businesses, suppliers, customers, employees and artificial agents, with a myriad of one-to-one interactions in a futuristic, always-on, hyper-connected society. No waiting, no down-time.

Setting the Scene

As a customer of a large mobile phone provider, you would like to email customer services with a specific complaint – you are receiving spam text messages and you are requesting they put a stop to it within 2 days, otherwise you will switch provider.

In today’s environment, not only is this impossible due to the provider’s lack of multi-channel capabilities (they cannot handle email), it also relies on a human processor who has to understand the nature of your request (human logic required) and who has a number of other tasks to handle (delayed response). Further, due to a lack of integration with your historical customer data, the agent is unable to appease you when it matters most (loyalty programmes and special offers), and the net result is that you remain an unhappy customer (defecting to the competition).

The business case stacks up when you multiply the issue across your customer base. Not only does the mobile phone provider have a potential customer attrition problem, it is also missing out on leveraging customer insight, in real-time, for up-sell / cross-sell opportunities. Reason enough to put forward a case for customer experience optimisation and the subsequent implementation of more advanced technological capabilities. The outcome: happier customers, healthier balance sheet.
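A hedged sketch of how the scenario above might look once automated: the inbound email is classified for intent and sentiment, joined with (entirely invented) customer history, and a set of actions is chosen immediately rather than queued for an agent. Thresholds, field names and the offer logic are illustrative assumptions only.

```python
# Illustrative automation of the spam-complaint scenario (all data invented).

CUSTOMER_HISTORY = {
    "cust-001": {"tenure_years": 6, "monthly_spend": 45.0, "open_complaints": 0},
}

def classify(email_text):
    """Crude stand-in for intent and sentiment detection."""
    text = email_text.lower()
    intent = "stop-spam-texts" if ("spam" in text and "stop" in text) else "general-enquiry"
    sentiment = "negative" if any(w in text for w in ("complaint", "switch", "unacceptable")) else "neutral"
    return intent, sentiment

def decide(customer_id, email_text):
    intent, sentiment = classify(email_text)
    history = CUSTOMER_HISTORY.get(customer_id, {})
    actions = []
    if intent == "stop-spam-texts":
        actions.append("suppress-marketing-sms")       # fix the root cause immediately
    if sentiment == "negative" and history.get("tenure_years", 0) >= 5:
        actions.append("offer-loyalty-discount")       # retain a long-standing customer
    actions.append("send-confirmation-email")
    return actions

print(decide("cust-001", "This is a complaint: stop the spam texts within 2 days or I switch provider."))
```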

Integrating Multi-Channel Input with Multi-Channel Output…and a whole lot in between

The idea of an automated business decision platform – driven by an integrated input-process-output chain of events – sounds simple in theory, but in practice is rather complex. The truth is that there are a number of enterprise software applications implicated in this story, not least CRM, ERP and Database Management systems holding a plethora of historical data, which may or may not need to be drawn upon at the “decision-point” in real-time.

In addition, the output element – or outbound communications component – is fragmented between the fast-growing marketing automation space and declining print media space. Traditional document composition systems will need to mature into multi-channel output systems (including mobile and voice capabilities) capable of creating meaningful content on-demand, as opposed to heavy-duty document production.

And last but not least, the glue that joins it all together, semantic analytics, acting as the central brain and decision-engine, remains a super-technology in early adoption. Even then we should not discount human intervention – after all, we are still some way off true artificial intelligence, meaning intelligent systems will still require a form of human touch. It is likely that decision management platforms will initially surface as decision-support engines, running as a centralised enterprise service. Consequently, they will automate much of the business process, but still require human guidance.
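The “decision-support rather than full automation” point can be sketched as a simple confidence gate: decisions scored above a threshold are executed automatically, anything less confident is routed to a person for confirmation. The threshold value and the scoring function below are placeholders, not the behaviour of any real platform.

```python
# Minimal human-in-the-loop gate for a decision-support service (illustrative).

AUTO_THRESHOLD = 0.85  # hypothetical confidence cut-off for touchless processing

def score_decision(request):
    """Placeholder for the semantic/analytics engine: returns (decision, confidence)."""
    if "cancel my contract" in request.lower():
        return "route-to-retention-specialist", 0.95
    return "send-standard-reply", 0.60

def handle(request):
    decision, confidence = score_decision(request)
    if confidence >= AUTO_THRESHOLD:
        return {"decision": decision, "mode": "automated"}
    return {"decision": decision, "mode": "human-review"}  # a person confirms or overrides

print(handle("I want to cancel my contract today"))
print(handle("Can you explain my latest bill?"))
```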

Part 2 of this article will explore barriers to adoption, and key recommendations to move this concept towards reality. 

Thursday, 20 June 2013

Semantics Gives Computing A Whole New Meaning

In my previous article “Rise of Linguistics” I highlighted the increased focus on language technologies by the big IT firms. They predict a future of computing where humans interact with intelligent machines through the use of natural language. And who would argue with that?

Linguistics represents an entirely new software category, providing super-intelligent tools to enable a whole new class of applications. It is the door opener to a new era of computing and a money-spinning ticket to the future. This futuristic branch of computing is based on deep scientific principles – from anthropology to neuroscience, and a whole lot more.

Make no mistake; this remarkable engineering achievement will make modern-day computing look like the dark ages. Computers will no longer be hamstrung by the restrictions placed upon their own unique computer “language” – orchestrated exclusively by IT programmers. Instead, human-computer interaction will become the societal norm, driven by its new interface, the human interface. The very fabric of computing will be changed forever.

The upcoming adoption of Semantics, a sub-discipline of Linguistics, will act as a proof point for this game-changing software category. Semantics, in the context of linguistics, aims to infer the meaning of language. This means that computers will understand human speech and text. This means that machines will be able to respond in kind. This means that Artificial Intelligence is coming…

Context, Context, Context – What Do You Actually Mean?

This means the world will communicate in one universal language thanks to real-time translation capabilities, both in speech and text. The “language barrier” will become all but a distant memory.

This means the internet will evolve from a static, unstructured information repository to a living artificial brain augmenting everyday life – providing you with knowledge and insight whenever you need it most.

This means computers will evolve from “dumb” terminals with one-way communication protocols, to interactive machines with intelligent two-way dialogue. Say hello to your new best friend.

This means everyday appliances will be voice-controlled – buttons, switches, and physical commands will disappear for good. You will tell your appliance what you want it to do. And it will do it.

This means business applications will receive a welcome face-lift as language capabilities provide a new layer of human-like intelligence, the first of its kind. Automation just got automated.

This means social applications will become more instinctive, more interactive, more helpful. More social. Your very own personal assistant awaits.

This means a whole new class of applications will be made possible. Digital forensics.  Counter-terrorism surveillance. Customer (self) service. Expert-as-a-service. Everything-as-a-service…the list is endless.

This means Internet content will be automatically tagged, linked, clustered and categorised – the semantic web will move from concept to reality.

This means marketing will undergo a step-change. Customer data will be mined like gold – empowering highly tailored, highly specific one-to-one communication, operating in real-time. Every customer touch point will be redefined.

This means search engines will be transformed overnight – keyword-based queries will be a thing of the past as meaning-driven search pinpoints your exact intent. Say goodbye to endless pages of irrelevant search results and say hello to precision.
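To make the keyword-versus-meaning contrast tangible, here is a deliberately crude sketch: a literal keyword search misses documents that express the same concept in different words, while even a tiny hand-written synonym map (a toy stand-in for genuine semantic matching) recovers them. The document set and vocabulary are invented.

```python
# Toy contrast between keyword search and a crude "meaning-aware" search.

DOCUMENTS = [
    "Our handset range includes the latest smartphones",
    "Opening hours for the downtown branch",
    "How to report a lost or stolen mobile phone",
]

SYNONYMS = {"phone": {"phone", "handset", "smartphone", "mobile"}}  # toy concept map

def keyword_search(query, docs):
    return [d for d in docs if query.lower() in d.lower()]

def semantic_search(query, docs):
    terms = SYNONYMS.get(query.lower(), {query.lower()})
    return [d for d in docs if any(t in d.lower() for t in terms)]

print(keyword_search("phone", DOCUMENTS))   # finds only the literal match
print(semantic_search("phone", DOCUMENTS))  # also finds "handset" and "smartphone"
```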

This means multi-lingual search will become the norm. Search queries will crawl data across multiple languages, returning results in the language of your choice, of course. If it exists, it will be found. If it doesn’t, you’ll not miss a thing. 

This means analytics will be taken to a whole new level. Masses of unstructured data will be brought to life – content understanding, fact extraction, evidence gathering, relationship mapping, sentiment analysis, concept search – impossible to achieve through human analysis. Easy to achieve through Artificial Intelligence.

This means… I think I have made the point. Welcome to the future.

Tuesday, 18 June 2013

Rise of Linguistics – IBM, Google & Microsoft in $100 billion+ Race

Big Data and Analytics dominate column inches, blogs, industry events, office conversations, business strategies, and the like, but Tech Giants IBM, Microsoft and Google are already busy uncovering the next big gem in IT. Step forward Linguistics.

Linguistics is a relatively young scientific study that seeks to understand the true building blocks of human language – something that has eluded even the most respected evolutionary biologists of today. Some consider it almost impossible to explain the origins of human language but that has not stopped the budding Linguistics community asking a series of searching questions. How and where did language originate? How did we come to understand language? What is the role of the brain? What are the intricate rules of language?   

The Computer Science discipline attempts to consider all of this in the context of computer consumption – no mean feat for such an elusive specialism. To be a little more specific, that is to reverse-engineer human language so that computers can understand, act upon and respond to human intent. Genius. Long the realm of science fiction, two-way conversation between man and intelligent machines is fast moving towards science fact. Artificial Intelligence is on its way, and boy, is it going to be worth the wait.

Natural Language Processing Edges Closer

The Linguistics field of study, often referred to as natural language processing (NLP) or natural language understanding (NLU) in computer science, is rapidly evolving after decades spent in R&D labs, at the cost of a small fortune. Whilst it is true NLP exists in part today, it is exactly that – partial. “Approximations” best describe current-state capabilities: a combination of statistical analysis, rule-based methods and heuristics (a type of shortcut technique) helping to arrive at estimated results. Mathematics not Science, if you will.

The problem is that it fundamentally lacks “intelligence” and therefore is not fit for purpose – at least not for mainstream consumption, anyway. Current methods fail to grasp the true “meaning” of language – a pre-requisite for Artificial Intelligence. To do so, you must understand grammar (structure), morphology (formation of words), syntax (formation of sentences from words) and most importantly of all, semantics (the relationships between words and sentences that form “meaning”).

And if you didn’t have to read that twice, you are doing well. Very well. Add in the complexities of language ambiguity, idiosyncrasy and sheer global variety and you’d be forgiven if your head starts spinning. Further consider that exact details are not explicitly coded in the language we use – in fact, much of our understanding is formed from our knowledge of the real world, learned through time – and you quickly come to realise that language is only part of the problem. The point being, this is an immensely complex subject domain, offering some explanation as to why this field remains very much in R&D mode.
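One way to see why ambiguity makes this so hard is a toy word-sense example: the same word resolves to different meanings depending on the words around it. The sense inventory and context words below are invented and vastly cruder than anything a real semantic engine would use, but they show the principle.

```python
# Toy word-sense disambiguation: meaning depends on context, not the word alone.

SENSES = {
    "bank": {
        "financial-institution": {"account", "loan", "deposit", "interest", "savings"},
        "river-bank": {"river", "water", "fishing", "shore"},
    }
}

def disambiguate(word, sentence):
    context = set(sentence.lower().replace(".", "").split())
    senses = SENSES.get(word)
    if not senses:
        return "unknown"
    # Pick the sense whose typical context words overlap most with the sentence.
    return max(senses, key=lambda sense: len(senses[sense] & context))

print(disambiguate("bank", "She opened a savings account at the bank."))
print(disambiguate("bank", "They went fishing on the bank of the river."))
```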

Nonetheless, NLP is an area moving forward with speed as key players look beyond the traditional statistical modelling approach towards natural science for answers, including biology, anthropology, psychology and neuroscience amongst others. The complex understanding of linguistics, gained through decades of interdisciplinary scientific study, is about to ensure Natural Language Processing gets a serious facelift.

Race For Glory

When the Tech Giants ramp up efforts, you can be sure the space is hotting up, each trying to reach the sea of gold that awaits. And that’s exactly what the likes of IBM, Microsoft, and now Google, have been busy doing. IBM and Microsoft have long been doing battle in the labs, whilst Google, a relative newcomer to the party, have also joined the race. 

So, what of the challengers?

Well, IBM leads the way with their creation of “Watson”, the quiz-winning supercomputer that beats humans at their own game. Using a unique combination of natural language interpretation, cognitive-style learning and hypothesis-based decisioning, Watson has the potential to define a revolutionary new service category – “experts-as-a-service” in highly specialist domains where decision-making is vital.

For example, Watson is currently “in training” as a medical practitioner, with the end-goal of providing physicians with decision-support in the patient diagnosis phase. Watson will help physicians interpret symptoms through the mass analysis of medical research data and the application of patient history.

Whilst IBM could be considered most advanced in terms of an end-to-end decision-support platform, other players are focusing on key components in the decision chain, such as NLP in its entirety. That can be said for Microsoft who, not to be outdone, recently wowed a Chinese audience with a demo of real-time speech translation based on its own NLP developments. The Language Translation market alone is projected to reach $100 billion by 2020, signalling the immense potential of NLP technology, especially when you consider this is merely one application of many.

And if IBM and Microsoft thought they were the only two players in town, then Google will no doubt have something to say. The recent high-profile hire of Artificial Intelligence thought-leader Ray Kurzweil to head up Google’s Natural Language Processing Group is a sure-fire statement of intent, as is the follow-up acquisition of summarisation vendor Wavii, perhaps a sign of further things to come.

Yet that only tells half of the story, at least if you hail from Russia. In what appears to be a classic tale of Russia vs America in yet another race for glory, the Russians boast a historic association with linguistics research dating back to the 1950s.

And it doesn’t end there. Niche Russian software vendor ABBYY, better known for its market-leading OCR and data capture capabilities, is understood to have made significant progress on what it calls a “universal linguistic platform”, claiming a first of its kind, having been in stealth mode since the company’s founding days, some 18 years earlier. That’s a rather large research project. Information remains at a premium, with the company rumoured to be gearing up for launch.

In my next article I will focus more specifically on the coming impact of Linguistics, including use cases of emerging semantic technologies and how these applications will completely redefine the computing landscape. Stay tuned. 

Robot DNA: Building Blocks For Artificial Life

In my previous article I wrote about the changing computing landscape, the game-changing technologies on the horizon and the arrival of a new frontier, Computing 2.0. Computing 2.0 represents not only a new age for computing but a whole new world for mankind.

In this particular piece I will explore some of the key game-changing technologies in the R&D stable, highlighting how each pioneering component helps to create a form of Robot DNA. In the near future, once these technologies mature and become unified, an intelligent robotic platform will begin to emerge. From here, the building blocks for artificial life will edge ever closer.

However, rather than speculate on if, or indeed when that may be, this article will spotlight the key science and technology disciplines converging to create a futuristic robotic-driven world.

Computer Science Meets Natural Science

Perhaps not surprisingly, each and every one of these emerging technologies originates from the study of the natural sciences. Bionics is considered an umbrella term at the forefront of this mission, defined by Wikipedia as “the application of biological methods and systems found in nature to the study and design of engineering systems and modern technology”. In other words, winning aspects of nature are studied, modelled and reverse-engineered for human benefit, a type of biology-inspired computing.

Research studies continue to focus on human cognition and consciousness, the qualities which make us uniquely human, and whilst we are yet to see the fruits of such intense and complex explorations, a number of exciting developments are bubbling in the R&D pipeline.

Emerging Technology Spotlight

The concept of Artificial Neural Networks (ANN), “a computational simulation of a biological neural network”, is set to play a key role in the evolution of Cognitive Computing. In short, ANN is a type of decision-making technology mimicking functions of the brain, bringing us one step closer to the concept of the artificial brain. When you also add Linguistics & Semantic Computing into the mix, these interrelated disciplines combine to form a powerful, brain-like system.
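For readers new to the term, here is a hedged sketch of what “a computational simulation of a biological neural network” boils down to at its very simplest: layers of simple units, weighted connections and a non-linear activation. The weights are random and untrained (assuming NumPy is available), so the output is meaningless; it only illustrates the structure being described.

```python
# Minimal feed-forward neural network (forward pass only), illustrating layered,
# weighted units with a non-linear activation. Weights are random and untrained.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# 3 inputs -> 4 hidden units -> 1 output, with random weights.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def forward(x):
    hidden = sigmoid(x @ W1 + b1)   # each hidden unit: weighted sum + activation
    return sigmoid(hidden @ W2 + b2)

print(forward(np.array([0.2, 0.7, 0.1])))
```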

Linguistics, the study of human language, is a little-known science discipline outside of its buzzing community. But not for long. Long the realm of science fiction and a key staple of Artificial Intelligence, this is one area set to truly revolutionise the way we live and work in the not so distant future. The goal of computer linguistics: to teach computers to understand and act upon human language, bringing intelligent two-way human-computer interaction to life.

Real-time speech translation will ensure the world talks in one universal language, powering a new wave of globalisation. This will merely be a sign of things to come. A plethora of new question-answer applications will be born, powering super-intelligent cognitive systems that supplement, and even replace, some of the most complex reasoning and analytical tasks and jobs undertaken by humans today. 

Semantics, a sub-discipline of the Linguistics domain, will initially drive up interest in this field as it seeks to bring meaning to a growing world of unstructured information – automatically. In theory, everything and anything will be linked, tagged and categorised with such consummate precision that intelligent search and information discovery will be taken to unimaginable new levels. A world knowledge model will be developed in the process, powering all kinds of super-intelligent applications, a layer of human-like intelligence consigning unstructured information issues to the fate of the dinosaurs. Extinction.

And finally, the concept of Super Machine Vision will breathe life into intelligent machines, as computers surpass the abilities of the human vision system. Computers will be able to detect, analyse and assess their environment, in the same dynamic manner as humans. Sub-disciplines such as Biometrics will also increasingly play an important part – automating the identification of humans by their unique characteristics and traits. A whole new world of intelligent computing applications will be made possible.

Age of Robotics

As we look to the future, a number of transformational technology innovations will join forces to announce the age of robotics, signalling a profound shift in global computing. In summary:
  • Computers will acquire a form of speech (speech to text, and text to speech) via rapidly maturing Speech Recognition
  • Computers will acquire a form of understanding and comprehension through long awaited, breakthrough Linguistics and Natural Language Processing technology
  • Computers will acquire a form of sight through Super Machine Vision that will recognise, analyse and detect real-world objects, people and things
  • And Cognitive Computing & Artificial Neural Networks will simulate the power of the human brain, giving rise to artificial brain-like decision and reasoning engines 

These technologies originate from the very heart of human consciousness, our innate senses – the way in which we see, hear and understand – and, as a race, we are rapidly making strides to reverse engineer these unique capabilities for computer consumption.


Planet Earth is certainly going to be an interesting place to be. 

Computing 2.0 – Dawn of a New Era

The 21st Century Computing Revolution

The industrial revolution of the late 18th and early 19th century marked a major turning point in history – from the large-scale production of chemicals, to steam-powered transportation, to the development of machine tools. Breakthrough inventions had a significant knock-on effect, acting as a key enabler and door opener for new innovations that would otherwise have been impossible. Almost every aspect of daily life was influenced in some way.

Fast forward to the current day and we are living and breathing a new revolution – the computing revolution. Yet the computing revolution remains very much a work in progress. Whilst computers and the internet have revolutionised the way we work, communicate and collaborate, the best, or worst (depending on your viewpoint), is yet to come.

The Changing Face of Computing – Get Set For A Major Upgrade

Cloud computing, social media and mobile devices are certainly leading the way when we look to the devices and platforms that will enable ubiquitous computing. But it will take an entirely new set of technologies to truly change the face of computing, powering mankind to the next level of civilisation in the process.

These technologies originate from the very heart of human consciousness – the way in which we see, understand and act. The qualities which make us uniquely human. Yet incredibly, we are rapidly making strides to reverse-engineer these capabilities for computer consumption, ultimately blurring our own vision of reality. Welcome to the world of Computing 2.0, a world of interactive, immersive and intelligent computing.

In today’s Computing 1.0 ecosystem human-computer dialogue is a one-way ticket consisting of multiple “on-off” interactions which lack real-time capabilities. You cannot converse with computers in two-way, human-like fashion. You are not eternally plugged-in to a computer-driven environment. You do not receive instantaneous responses to all manner of things. And you simply power-down once finished.

In stark contrast, the emerging Computing 2.0 landscape will give rise to an always-on society living in a permanent mixed-world state – the real world augmented with information, optical illusions and the wildest imaginations of your choice. Computer-driven interactions will be woven into everyday life, increasingly operating 24/7, with zero down-time. Human-computer interaction will reach unprecedented levels of intelligence – a psychological dilemma in waiting – questions arising as to who and what we are actually conversing with. Human being or virtual agent? What is real and what is not?

We will increasingly interact and converse with language-savvy computers as machines become man’s best friend. Looking for advice? Have a burning question? No problem. Ask away.   

The gap between humans and machines will become increasingly blurred as we migrate towards a permanently-connected computing world, powered by intelligent two-way human-computer dialogue. Wearable computing devices will become the norm as we create a perpetual, superimposed, artificial society.  What will start out as a pair of protruding glasses will quickly retreat to less-than-obvious contact lenses, eventually disappearing entirely, as microscopic devices become the computing platform of choice.

Version Change… And A New Era For Mankind

Computing 2.0 will represent a seismic shift in the way in which we consume IT, as we become increasingly reliant, increasingly engaged, and increasingly entangled in technology. Such a dependence will be driven by a series of game-changing technologies gathering pace on the horizon – not yet here, but well on their way. 

Collectively, this set of technologies is poised to disrupt not only the IT and communications industry, but moreover, change the game for civilisation as we know it. And as this new era of civilisation emerges, any type of virtual reality experience will become possible, often indistinguishable from the real thing.  

Rise Of The Robot

So, where are we headed? Well, the emerging Computing 2.0 landscape will pave the way for the rise of the robot, the ultimate prize. The infrastructure, the hardware, the software, the algorithms – each component combining and integrating to create human-like features as machines learn language, develop sight, adopt hearing and embrace a sense of touch. When Robotics finally announces its mainstream introduction, much like the earlier industrial revolution, almost every aspect of daily life will be influenced in some way – this time on an even grander scale.

The subsequent rise of Robotics will indisputably go down as a key milestone in human history. It will be known as the era where truly disruptive technology transformed our world before our very eyes, an era where science fiction became science fact.