
Saturday 17 February 2018

Dimensions of Complexity of AI


Dimensions of Complexity
Agents acting in environments range in complexity from thermostats to companies with multiple goals acting in competitive environments. A number of dimensions of complexity exist in the design of intelligent agents. These dimensions may be considered separately but must be combined to build an intelligent agent. Together they define a design space of AI; different points in this space can be obtained by varying the values of the dimensions.
Here we present nine dimensions: modularity, representation scheme, planning horizon, sensing uncertainty, effect uncertainty, preference, number of agents, learning, and computational limits. These dimensions give a coarse division of the design space of intelligent agents. There are many other design choices that must be made to build an intelligent agent.
1 Modularity
2 Representation Scheme
3 Planning Horizon
4 Uncertainty
  4.1 Sensing Uncertainty
  4.2 Effect Uncertainty
5 Preference
6 Number of Agents
7 Learning
8 Computational Limits
9 Interaction of the Dimensions

8 Computational Limits

Sometimes an agent can decide on its best action quickly enough for it to act. Often there are computational resource limits that prevent an agent from carrying out the best action. That is, the agent may not be able to find the best action quickly enough within its memory limitations to act while that action is still the best thing to do. For example, it may not be much use to take 10 minutes to derive what was the best thing to do 10 minutes ago, when the agent has to act now. Often, instead, an agent must trade off how long it takes to get a solution with how good the solution is; it may be better to find a reasonable solution quickly than to find a better solution later because the world will have changed during the computation.
The computational limits dimension determines whether an agent has
  • perfect rationality, where an agent reasons about the best action without taking into account its limited computational resources; or
  • bounded rationality, where an agent decides on the best action that it can find given its computational limitations.
Computational resource limits include computation time, memory, and numerical accuracy caused by computers not representing real numbers exactly. An anytime algorithm is an algorithm whose solution quality improves with time. In particular, it is one that can produce its current best solution at any time, but given more time it could produce even better solutions. We can ensure that the quality doesn't decrease by allowing the agent to store the best solution found so far and return that when asked for a solution. However, waiting to act has a cost; it may be better for an agent to act before it has found what would have been the best solution.
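To make this concrete, here is a minimal Scala sketch of an anytime agent loop. The initialGuess and improve routines are hypothetical stand-ins for a real solver; the point is that the best solution found so far is stored, so the reported quality never decreases and an answer can be returned whenever the deadline arrives.

import scala.util.Random

// A minimal sketch of an anytime agent loop. initialGuess() and improve()
// are hypothetical stand-ins for a real solver.
object AnytimeSketch {
  final case class Solution(action: String, quality: Double)

  // Hypothetical: a quick first answer produced with virtually no computation.
  def initialGuess(): Solution = Solution("default action", 0.1)

  // Hypothetical: spend a small slice of time searching for a better solution.
  def improve(current: Solution): Solution =
    if (Random.nextDouble() < 0.3)
      current.copy(quality = current.quality + Random.nextDouble() * 0.2)
    else current

  def main(args: Array[String]): Unit = {
    val deadlineMs = 100L
    val start = System.currentTimeMillis()
    var best = initialGuess() // store the best solution found so far
    while (System.currentTimeMillis() - start < deadlineMs) {
      val candidate = improve(best)
      if (candidate.quality > best.quality) best = candidate // quality never decreases
    }
    println(s"Acting now with: $best")
  }
}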

Figure 1.5: Solution quality as a function of time for an anytime algorithm. The agent has to choose an action. As time progresses, the agent can determine better actions. The value to the agent of the best action found so far, if it had been carried out initially, is given by the dashed line. The reduction in value to the agent by waiting to act is given by the dotted line. The net value to the agent, as a function of the time it acts, is given by the solid line.

Example 1.9: Figure 1.5 shows how the computation time of an anytime algorithm can affect the solution quality. The agent has to carry out an action but can do some computation to decide what to do. The absolute solution quality, had the action been carried out at time zero, shown as the dashed line at the top, is improving as the agent takes time to reason. However, there is a penalty associated with taking time to act. In this figure, the penalty, shown as the dotted line at the bottom, is proportional to the time taken before the agent acts. These two values can be added to get the discounted quality, the time-dependent value of computation; this is the solid line in the middle of the graph. For the example of Figure 1.5, an agent should compute for about 2.5 time units, and then act, at which point the discounted quality achieves its maximum value. If the computation lasts for longer than 4.3 time units, the resulting discounted solution quality is worse than if the algorithm just outputs the initial guess it can produce with virtually no computation. It is typical that the solution quality improves in jumps; when the current best solution changes, there is a jump in the quality. However, the penalty associated with waiting is often not as simple as a straight line.
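The stopping decision itself is a small computation. The sketch below uses made-up quality and penalty curves loosely shaped like those of Figure 1.5 (quality improves in jumps, the waiting penalty grows linearly) and simply picks the acting time that maximizes the net value.

// A toy recreation of the Figure 1.5 trade-off, with made-up curves:
// quality(t) improves in jumps, penalty(t) grows linearly while waiting.
object DiscountedQuality {
  def quality(t: Double): Double =         // hypothetical best-so-far quality
    if (t < 1.0) 1.0 else if (t < 2.5) 2.0 else 2.8
  def penalty(t: Double): Double = 0.5 * t // hypothetical cost of waiting

  def main(args: Array[String]): Unit = {
    val times = (0 to 50).map(_ * 0.1)     // candidate acting times 0.0 to 5.0
    val act = times.maxBy(t => quality(t) - penalty(t))
    println(f"Act at t = $act%.1f, net value ${quality(act) - penalty(act)}%.2f")
  }
}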

To take into account bounded rationality, an agent must decide whether it should act or think more. This is challenging because an agent typically does not know how much better off it would be if it only spent a little bit more time reasoning. Moreover, the time spent thinking about whether it should reason may detract from actually reasoning about the domain. However, bounded rationality can be the basis for approximate reasoning.

9 Interaction of the Dimensions

Figure 1.6 summarizes the dimensions of complexity. Unfortunately, we cannot study these dimensions independently because they interact in complex ways. Here we give some examples of the interactions.

Dimension: Values
Modularity: flat, modular, hierarchical
Representation scheme: states, features, relations
Planning horizon: non-planning, finite stage, indefinite stage, infinite stage
Sensing uncertainty: fully observable, partially observable
Effect uncertainty: deterministic, stochastic
Preference: goals, complex preferences
Learning: knowledge is given, knowledge is learned
Number of agents: single agent, multiple agents
Computational limits: perfect rationality, bounded rationality

Figure 1.6: Dimensions of complexity

The representation dimension interacts with the modularity dimension in that some modules in a hierarchy may be simple enough to reason in terms of a finite set of states, whereas other levels of abstraction may require reasoning about individuals and relations. For example, in a delivery robot, a module that maintains balance may only have a few states. A module that must prioritize the delivery of multiple parcels to multiple people may have to reason about multiple individuals (e.g., people, packages, and rooms) and the relations between them. At a higher level, a module that reasons about the activity over the day may only require a few states to cover the different phases of the day (e.g., there might be three states: busy time, available for requests, and recharge time).
The planning horizon interacts with the modularity dimension. For example, at a high level, a dog may be getting an immediate reward when it comes and gets a treat. At the level of deciding where to place its paws, there may be a long time until it gets the reward, and so at this level it may have to plan for an indefinite stage.
Sensing uncertainty probably has the greatest impact on the complexity of reasoning. It is much easier for an agent to reason when it knows the state of the world than when it doesn't. Although sensing uncertainty with states is well understood, sensing uncertainty with individuals and relations is an active area of current research.
The effect uncertainty dimension interacts with the modularity dimension: at one level in a hierarchy, an action may be deterministic, whereas at another level, it may be stochastic. As an example, consider the result of flying to Paris with a companion you are trying to impress. At one level you may know where you are (in Paris); at a lower level, you may be quite lost and not know where you are on a map of the airport. At an even lower level responsible for maintaining balance, you may know where you are: you are standing on the ground. At the highest level, you may be very unsure whether you have impressed your companion.
Preference models interact with uncertainty because an agent must trade off satisfying a major goal with some probability against satisfying a less desirable goal with a higher probability. This issue is explored in Section 9.1.
Multiple agents can also be used for modularity; one way to design a single agent is to build multiple interacting agents that share a common goal of making the higher-level agent act intelligently. Some researchers, such as Minsky (1986), argue that intelligence is an emergent feature from a "society" of unintelligent agents.
Learning is often cast in terms of learning with features - determining which feature values best predict the value of another feature. However, learning can also be carried out with individuals and relations. Much work has been done on learning hierarchies, learning in partially observable domains, and learning with multiple agents, although each of these is challenging in its own right without considering interactions with multiple dimensions.
Two of these dimensions, modularity and bounded rationality, promise to make reasoning more efficient. Although they make the formalism more complicated, breaking the system into smaller components, and making the approximations needed to act in a timely fashion and within memory limitations, should help build more complex systems.

Monday 12 February 2018

Sparkling Water = H2O + Apache Spark






H2O – The Killer-App on Apache Spark

In-memory big data has come of age. The Apache Spark platform, with its elegant API, provides a unified platform for building data pipelines. H2O has focused on scalable machine learning as the API for big data applications. Spark + H2O combines the capabilities of H2O with the Spark platform – converging the aspirations of data science and developer communities. H2O is the Killer-Application for Spark.

Backdrop
Over the past few years, we watched Matei and the team behind Spark build a thriving open-source movement and a great development platform optimized for in-memory big data, Spark. At the same time, H2O built a great open source product with a growing customer base focused on scalable machine learning and interactive data science. These past couple of months the Spark and H2O teams started brainstorming on how to best combine H2O’s Machine Learning capabilities with the power of the Spark platform. The result: Sparkling Water.

Sparkling Water
Users can, in a single invocation and process, get the best of Spark – its elegant APIs, RDDs and multi-tenant context – together with H2O's speed, columnar compression and fully featured machine learning and deep learning algorithms.
One of the primary draws of Spark is its unified nature, enabling end-to-end data pipelines to be built within a single system. This collaboration is designed to seamlessly make H2O's advanced capabilities part of that pipeline. The first step in this journey is enabling in-memory sharing through Tachyon and RDDs. The roadmap includes deeper integration where H2O's columnar-compressed capabilities can be natively leveraged through an 'H2ORDD'.
Today, data gets parsed and exchanged between Spark and H2O via Tachyon. Users can interactively query big data both via SQL and ML from within the same context.
Sparkling Water enables the use of H2O's Deep Learning and advanced algorithms for Spark's user community. H2O as the killer application provides a robust machine learning engine and API for the Spark platform. This will further empower application developers on Spark to build smarter, more intelligent applications (see the demo code below).
MLlib and H2O: The Triumph of Open Source!



MLlib is a library of efficient implementations of popular algorithms directly built using Spark. We believe that enterprise customers should have the choice to select the best tool for meeting their needs in the context of Spark. Over time, H2O will accelerate the community’s efforts towards production ready scalable machine learning. Fast fully featured algorithms in H2O will add to growing open source efforts in R, MLlib, Mahout and others, disrupting closed and proprietary vendors in machine-learning and predictive analytics.
Natural integration of H2O with the rest of Spark’s capabilities is a definitive win for enterprise customers.
Demo Code
package water.sparkling.demo

import water.fvec.Frame
import water.util.Log
import hex.gbm.GBMCall.gbm

object AirlinesDemo extends Demo {
  override def run(conf: DemoConf): Unit = {
    // Prepare data
    // Dataset
    val dataset = "data/allyears2k_headers.csv"
    // Row parser
    val rowParser = AirlinesParser
    // Table name for SQL
    val tableName = "airlines_table"
    // Select all flights with destination == SFO
    val query = """SELECT * FROM airlines_table WHERE dest="SFO" """

    // Connect to the Shark cluster, run the query over the airlines table, and transfer the data into H2O
    val frame:Frame = executeSpark[Airlines](dataset, rowParser, conf.extractor, tableName, query, local=conf.local)
    Log.info("Extracted frame from Spark: ")
    Log.info(if (frame != null) frame.toString + "\nRows: " + frame.numRows() else "No frame extracted")

    // Now make a blocking call of GBM directly via Java API
    val model = gbm(frame, frame.vec("isDepDelayed"), 100, true)
    Log.info("Model built!")
  }

  override def name: String = "airlines"
}


Friday 25 August 2017

A mark of one's existence - records in the blockchain Bitcoin


Part of being a human is wanting to leave a mark on the world. Within us lies a deep need to be remembered, in some form, after we die. We see it in the Cueva de las Manos - the Cave of the Hands - where the inhabitants left outlines of their hands painted on the walls as early as 13,000 years ago.

Cave of the Hands

We hear the same longing from Horace some 2000 years ago in his Odes, when he states "non omnis moriar" - "not all of me will die". Similarly, for the ancient Romans, to condemn someone to be forgotten was a fate worse than death. It was called damnatio memoriae - "the condemnation of memory".

In our digital age it is perhaps easier than ever to remove someone from history. While it has never been easier to record what's going on, it is just as easy to alter and distort events thanks to tools like Photoshop.


Photos can be altered, memories can be called into question, records can be rewritten, and we can end up with the Mandela Effect. Add to that the right to be forgotten, and soon it might be hard to trust any record - or the lack of one - on the Internet. George Orwell would be proud of what we could do to make someone an unperson.

Everything could be subject to change. Everything that is, except blockchains.

Proof of Existence


While working at Factom I heard a great tagline - "It is hard to guess today what lie you want to tell tomorrow". It is a profound statement in today's world of digital records - if you can't backdate or alter historical records after the fact, you'd better be completely sure ahead of time how you want to proceed.

All of this is of course only possible through Proof of Existence and blockchain technology. Only networks such as Bitcoin or Ethereum can still be seen as objective records of history. They alone are big enough to be secure from tampering (if you can't 51%-attack the blockchain, you can't rewrite its history) and public enough to ensure that any attempt at tampering with them becomes a publicly known event. Because of that, any data embedded in the blockchain will remain unchanged and hopefully preserved for as long as the blockchain persists.

Record of my data


Today is my 30th birthday, and I decided to celebrate with a little experiment.

A few months back I contacted the Personal Genome Project Canada to participate in their research and get my genome sequenced. It has been an interesting experience, and I did find some correlation between my genetic predispositions and the health quirks I've been experiencing my whole life.

During the study I requested a copy of my sequenced genomic data. It was shipped to me on an external hard drive, as the files themselves came to 200GB. After leaving my computer to crunch the numbers, out came the SHA-256 result - "de7a8430be51538ebcdd031390e0de3f7cde74a9c88a76e64406e88b6259d4fe". That was the hash of my genetic information - probably the most elegant version of a digital handprint I could find.
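Computing such a fingerprint is straightforward; below is a minimal Scala sketch that streams a file through SHA-256 in one-megabyte chunks, so even a 200GB archive never has to fit in memory. The file name is just a placeholder.

import java.io.{BufferedInputStream, FileInputStream}
import java.security.MessageDigest

// Stream a (potentially huge) file through SHA-256 chunk by chunk.
object FileHash {
  def sha256(path: String): String = {
    val digest = MessageDigest.getInstance("SHA-256")
    val in = new BufferedInputStream(new FileInputStream(path))
    try {
      val buffer = new Array[Byte](1 << 20) // 1 MB chunks
      var read = in.read(buffer)
      while (read != -1) {
        digest.update(buffer, 0, read)
        read = in.read(buffer)
      }
    } finally in.close()
    digest.digest().map("%02x".format(_)).mkString
  }

  def main(args: Array[String]): Unit =
    println(sha256("genome_data.tar")) // hypothetical file name
}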

After playing with the debug options in BitcoinQT, I managed to wrap it up neatly in the transaction 32a0f8febb0f9f9c7fe1ce9a6b2a59356f443e27186d2e4b5c5a9a3e5e16f4cd, sent from my two favourite addresses - 17TQLZvXjKTrUyRnV9DuQs4RVDgNjUPeXQ, the address in which I received my first coins in 2011, and 1PiachuEVn6sh52Ez7o6Fymvw54qvQ4RBm, my own geeky little vanity address. And so, in block 1597975 (000000000000000000b43bb4162374befa73a882efa6279d87cd3f11548cff59) my transaction was anchored and became part of the blockchain history, alongside things like the blockchain marriage, a tribute to Len Sassaman, and the infamous Times headline chosen by Satoshi Nakamoto. To the best of my knowledge, I'm the first person to have embedded a hash of their full genetic information this way.

It wasn't my first foray into embedding data into Bitcoin's history. That honour goes to the illegal number from 2012, embedded as part of my master's thesis research.

Larger records of data


Admittedly, the process of saving the data into the Bitcoin blockchain was a bit complicated. Preparing the inputs by hand and making sure the data itself is fairly small can all be rather limiting, and potentially expensive with larger amounts of records. This is why it might be worthwhile to consider protocols that extend the Bitcoin protocol while still offering the same cryptographic proof of existence. In comes Factom (full disclosure: I work for Factom).

With the intent of storing the same data, I created a new chain with my name and alias - ef020b0dc14223ca454cb69b36143ffbafa8b09c0ff962b18742cd97a02735c9. The hash was anchored in transaction cdeb46cad69c01f79864e20a56cb227b94c9738b79d8291e4181f5cbd9b86f27 that became part of the block 96731 (d8fea7d7df13f0e629817a552719a7e7e9860023313ddaa5fa76ad34d655ace1).

Now, there is an extra step that needs to be taken between here and Bitcoin - the anchoring process, which is performed by an automated server. It created a transaction bebfc29801239ad254da97b253c864736257143f17e3519e03e05e3761f57a8f that made its way into Bitcoin block 1598016. And here comes the magic trick that gets us from a Factom transaction into a Bitcoin block, "the receipt":
{
   "receipt":{
      "entry":{
         "entryhash":"cdeb46cad69c01f79864e20a56cb227b94c9738b79d8291e4181f5cbd9b86f27"
      },
      "merklebranch":[
         {
            "left":"cdeb46cad69c01f79864e20a56cb227b94c9738b79d8291e4181f5cbd9b86f27",
            "right":"0000000000000000000000000000000000000000000000000000000000000003",
            "top":"07e5e997757ce1c4e935aecff3e1fb4bb9f7c466329de38ae19c342106283e7b"
         },
         {
            "left":"c48f1c742f8aea8c834b07615776e6c9f79d2300b4e1eb29ea6e295a55823402",
            "right":"07e5e997757ce1c4e935aecff3e1fb4bb9f7c466329de38ae19c342106283e7b",
            "top":"6d423f0c963ce0a9744eec94e07816263b82c1514c048fc43c791e19a44b7458"
         },
         {
            "left":"ef020b0dc14223ca454cb69b36143ffbafa8b09c0ff962b18742cd97a02735c9",
            "right":"6d423f0c963ce0a9744eec94e07816263b82c1514c048fc43c791e19a44b7458",
            "top":"d985f353aa34b1b7f021a30816019eac3cfd486743eb81b63295d12e7aa182f6"
         },
         {
            "left":"76c2296711dfc90eff2cec432b5592155ce13c4bd0f9cc15b01f842994358f35",
            "right":"d985f353aa34b1b7f021a30816019eac3cfd486743eb81b63295d12e7aa182f6",
            "top":"cb75287e2e1b170e5f5dc99ae7b738139305ad822e0c311cbdfb82ab0fa5d31d"
         },
         {
            "left":"3c00225e5d9f6d5e62c2926c02c5c03c31eaa831ee48d6e216dbe3b637125665",
            "right":"cb75287e2e1b170e5f5dc99ae7b738139305ad822e0c311cbdfb82ab0fa5d31d",
            "top":"fd03d8be680bb8c36ba01f224c71160f934c732a42de1c6d1d106b678e0f23a6"
         },
         {
            "left":"fabbd3f11bb85847530a6493361f3654d8617ab82ea3e34ddcc337c976917ec9",
            "right":"fd03d8be680bb8c36ba01f224c71160f934c732a42de1c6d1d106b678e0f23a6",
            "top":"92545cf4f7485731b6ee9007f9d3348759cd2edda60a9e5e7bc6ef2fa4f11cd1"
         },
         {
            "left":"35f75955731e0cfd98653a5979c6e53a0e97cd49ae91b06ec31001a96625666c",
            "right":"92545cf4f7485731b6ee9007f9d3348759cd2edda60a9e5e7bc6ef2fa4f11cd1",
            "top":"4c9d45d122337f6a85084b1492bbd3fe5fcd8a2bbfc71e7bacf283668fa0770b"
         },
         {
            "left":"4c9d45d122337f6a85084b1492bbd3fe5fcd8a2bbfc71e7bacf283668fa0770b",
            "right":"d9d488d0ddc24aae887d86ce094de1579fe10ce06e8f6b8cdb434f45c8d0cdcd",
            "top":"c0ee8f8410515485de6ca7831dcd09856e08ec89799cf90778ea3211b41b4ba5"
         },
         {
            "left":"e327276f2bbfa0bb9dc9d89095abcb0fe7dc3373a31392892099824c89c332a4",
            "right":"c0ee8f8410515485de6ca7831dcd09856e08ec89799cf90778ea3211b41b4ba5",
            "top":"d8fea7d7df13f0e629817a552719a7e7e9860023313ddaa5fa76ad34d655ace1"
         }
      ],
      "entryblockkeymr":"6d423f0c963ce0a9744eec94e07816263b82c1514c048fc43c791e19a44b7458",
      "directoryblockkeymr":"d8fea7d7df13f0e629817a552719a7e7e9860023313ddaa5fa76ad34d655ace1",
      "bitcointransactionhash":"bebfc29801239ad254da97b253c864736257143f17e3519e03e05e3761f57a8f",
      "bitcoinblockhash":"000000000000000000746bcc20463036af6deb09931d78fbd02546042b80f1d1"
   }
}

While it might look like gibberish, it's a simplified-payment-verification-style Merkle branch leading from the transaction hash through the entry block key Merkle root and the directory block key Merkle root, up to the Bitcoin transaction itself. As the chain of hashes is complete, one can mathematically prove that the transaction indeed made its way into the Factom block and was anchored into the Bitcoin blockchain.
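For the curious, here is a rough Scala sketch of what checking such a receipt could look like. It assumes each "top" is the SHA-256 of the hex-decoded left and right hashes concatenated, and that each step's top reappears in the following step; the exact hashing rules are defined by the Factom protocol, so treat this as an illustration rather than a reference implementation.

import java.security.MessageDigest

// Sketch of verifying a Merkle branch like the receipt above. Assumption:
// each "top" is SHA-256(left ++ right) over the hex-decoded hashes, and
// each step's top reappears as the left or right of the next step.
object MerkleBranchCheck {
  final case class Step(left: String, right: String, top: String)

  def hexToBytes(hex: String): Array[Byte] =
    hex.grouped(2).map(Integer.parseInt(_, 16).toByte).toArray

  def bytesToHex(bytes: Array[Byte]): String =
    bytes.map("%02x".format(_)).mkString

  def sha256(bytes: Array[Byte]): Array[Byte] =
    MessageDigest.getInstance("SHA-256").digest(bytes)

  // Does hashing left ++ right really give us top?
  def stepValid(s: Step): Boolean =
    bytesToHex(sha256(hexToBytes(s.left) ++ hexToBytes(s.right))) == s.top

  def verify(branch: Seq[Step], entryHash: String, directoryMr: String): Boolean = {
    val chained = branch.sliding(2).forall {
      case Seq(a, b) => b.left == a.top || b.right == a.top
      case _         => true // a single-step branch has nothing to chain
    }
    branch.forall(stepValid) &&
      (branch.head.left == entryHash || branch.head.right == entryHash) &&
      chained &&
      branch.last.top == directoryMr
  }
}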

The same mechanism could be used to anchor data such as text into the blockchain, for example securing entire blog posts to prove they existed unaltered in their current state at a given point in time. I intend on doing that for this blog once I narrow down the ideal format, but that's a story for another day.

Conclusion


Bitcoin is probably the first objective, immutable record of history we have. Any data saved into the blockchain will hopefully remain preserved for a long time. And it is possible to extend Proof of Existence to larger data sets without needlessly bloating the Bitcoin blockchain itself.

Wednesday 22 March 2017

Artificial intelligence and cognitive computing: the what, why and where

Although artificial intelligence (as a set of technologies, not in the sense of mimicking human intelligence) has been here for a long time in many forms and ways, it's a term that quite some people, certainly IT vendors, don't like to use much anymore – but artificial intelligence is real, for your business too.
Instead of talking about artificial intelligence (AI), many describe the current wave of AI innovation and acceleration with – admittedly somewhat differently positioned – terms and concepts such as cognitive computing, or they focus on real-life applications of artificial intelligence that often start with words such as "smart" (omnipresent in anything Internet of Things as well), "intelligent", "predictive" and, indeed, "cognitive", depending on the exact application – and vendor. Despite the term issues, artificial intelligence is essential for and in, among others, information management, medicine/healthcare, data analysis, digital transformation, security (cybersecurity and others), various consumer applications, scientific advances, FinTech, predictive systems and so much more. An exploration.

The historical issue with artificial intelligence – is cognitive better?

There are many reasons why several vendors hesitate to use the term artificial intelligence for AI solutions/innovations and often package them under another term (trust us, we've been there). Artificial intelligence (AI) is a term with somewhat of a negative connotation in general perception, but also in the perception of technology leaders and firms.
One major issue is that artificial intelligence – which is really a broad concept/reality covering many technologies and realities – has become a thing we all talk about and seem to need an opinion/feeling about, thanks to, among others, popular culture. Hollywood loves AI (or rather superintelligence, which is not the same). It makes for good sci-fi blockbusters and movies where non-human 'things' such as robots take over the world. The fact that AI is such a broad concept leads to misunderstandings about what it exactly means. Some people are really speaking about machine learning when they talk about AI. Others essentially talk about analytics, and in doomsday movie scenarios everything gets mixed together, including robotics and superintelligence, something we don't have yet. In most cases we are really talking about some form of AI.
Fast-growing AI technologies for consumer-facing industries include chatbots, virtual personal assistants (VPAs) and smart advisors.
This phenomenon goes hand in hand with the fact that artificial intelligence has failed to deliver upon the expectations of previous 'popularity waves' (going back to the previous millennium, see the box below this article) and is really old as a concept, research field and set of technologies – making it less appealing for many vendors. Obviously AI technologies and applications, as well as expectations, have evolved, albeit less than some would like us to believe.
Still, deep learning, image recognition, hypothesis generation, artificial neural networks, they’re all real and parts are used in various applications. According to IDC, cognitive computing is one of six Innovation Accelerators on top of its third platform and the company expects global spending on cognitive systems to reach nearly $31.3 billion in 2019. You’ll notice IDC speaks about cognitive too (more about the meaning of cognitive systems as an innovation accelerator further in this article).
Artificial intelligence is being adopted faster in many technological and societal areas, although there is quite some hype from vendors about what "it" can do. Still, the increasing attention on and adoption of forms of AI in specific areas triggers debates about how far we want it to go in the future. Prominent technology leaders have warned about the dangers, and think tanks and associations have been set up to think about and watch over the long-term impact of AI (and robotics), with discussions on the future of humanity and the impact of superintelligence, but also, closer to today's concerns, the impact of automation/AI/robots on employment. Anyway, it all adds to that mix of ingredients creating the conditions that strengthen the negative connotation of the term artificial intelligence – and, as current political shifts show, of automation/digitalization as a whole.
If it makes us feel more comfortable to talk about "intelligent", "cognitive" or "smart" anything, so be it. What matters more is that artificial intelligence is here and increasingly will be, why it's here, how it helps and is used, and what it can mean for you.

Artificial intelligence in context: how it interacts with other transformational technologies

When people try to explain that artificial intelligence has already been here for a long time in some form, they often refer to the algorithms that power Google's search technology. Or an avalanche of apps on mobile devices. Strictly speaking, these algorithms are not the same as AI though.
The artificial intelligence market is estimated to grow from USD 419.7 million in 2014 to USD 5.05 billion by 2020, at a CAGR of 53.65% from 2015 to 2020.
Also think about speech recognition, for instance. Or identification technologies, product recommendations and even the electronic games we play. And of course there are many examples, depending on industry or function. Marketing, for instance, uses a bunch of platforms with forms of AI: from the sentiment analysis in social platforms to the predictive capabilities in data-driven marketing solutions.
So, artificial intelligence is many things. A graphic from research by Narrative Science shows various areas in the broader ecosystem of AI, ranging from text mining to deep learning and recommendation engines. The latter is something everyone knows, or at least uses, with recommendation engines being literally everywhere. And then there are all those apps such as Uber and Airbnb that connect you with, respectively, an Uber driver in the neighbourhood or an Airbnb place to stay – powered by AI.
Artificial intelligence is many things – research by Narrative Science shows various areas in the broader ecosystem of AI – image: Narrative Science via InformationWeek
To understand the role and current wave of AI in today’s and tomorrow’s business and society context it’s important to look at the realities and technologies underneath the big overlapping umbrella term. It’s also important to see the current wave of artificial intelligence in a context of big data, unstructured data, integration and digital transformation.
One of the reasons why artificial intelligence – maybe not the term – has become so hot right now is the fact that it is a perfect fit for – and even indispensable enabler of – other technologies and the possibilities they offer. Sometimes you just need artificial intelligence techniques.

The interconnectedness of 3rd platform technologies and artificial intelligence

As we don’t feel the urge to reinvent the lists of technologies that enable and accelerate digital transformation and innovation, we’ll use IDC’s previously mentioned famous 3rd platform, although you can use many others.
Innovation accelerators – new core technologies as added by IDC to its 3rd Platform
The foundation of that so-called 3rd platform consists of 4 sets of technologies that are interconnected and de facto inherently connected with AI as well.
As a reminder: the high interconnectivity of technologies and processes in real-life applications is a core trait of what we've come to know as the digital transformation or DX economy.
Each of these sets of technologies (they are not things either but just as AI consist of several technologies and, more importantly, applications and consequences) are technological drivers of digital transformation as such.
On top of these 4 foundational sets or pillars (cloud, mobility, social and Big Data/Analytics) come so-called innovation accelerators, the term we used before.
These are again various – and ever more – sets of technologies and technological innovations that drive digital transformation. All of them are inherently integrated with artificial intelligence, and in reality some are even close to being synonyms of AI.

Cognitive systems: an innovation accelerator

One of these innovation accelerators, as you can see in the image of the 3rd platform, are so-called cognitive systems technologies themselves.
Cognitive computing is really a term popularized mainly by IBM to describe the current wave of artificial intelligence with a twist of purpose, adaptiveness, self-learning, contextuality and human interaction. The human element is key here, and without a doubt also easier to digest than all those AI-related doomsday movie scenarios.
Essentially, cognitive systems analyze the huge amounts of data created by connected devices (not just the Internet of Things) with diagnostic, predictive and prescriptive analytics tools which observe, learn and offer insights, suggestions and even automated actions. As you probably know, a pillar of IBM's view is IBM Watson, as we'll tackle below. Strictly speaking, the term 'cognitive computing' is a conundrum: cognition, for instance, also includes the subconscious, which is in fact a major part of cognition. Although this would take us too far afield, it needs to be said that IBM does make exaggerated claims about what its 'cognitive' platform Watson can do. Marketing indeed.

Cognitive computing and AI powering digital evolutions: from enabling IoT and making sense of unstructured big data to next stage security

Other innovation accelerators include the Internet of Things. Here as well, AI and cognitive computing or cognitive systems are omni-present.

AI and the Internet of Things

Once you start connecting everything, you need APIs, connectors, information analysis technologies and "embedded intelligence", essentially code that makes it all possible.
Moreover, the Internet of Things, which really is about automation and information (with, on top of that, a layer of possibilities to, for instance, enhance customer experience or make life a bit easier), adds loads of data – Big Data, one of the four pillars of the 3rd platform – to an already exploding digital data universe. The majority of all that data is unstructured and needs to be turned into knowledge and (automated) actions, as good old rules-based information management approaches simply can't handle it. Guess what is needed to make this possible, and even to make all these other aspects of the Internet of Things possible? Indeed: artificial intelligence.

The future of security is intelligent too

We won’t cover all the other innovation accelerators except one: next generation security.
Do you remember that cybersecurity was – and often still is – seen as a range of “defensive” solutions and approaches (from strategies to technologies such as firewalls and anti-virus apps)? Well, that is changing.
Security is becoming more holistic, also looking at the human aspect and all the elements in a changing security perimeter. But most of all: security is becoming more pro-active, and technologies to predict cyberattacks before they even happen are in high demand. What do they use? Indeed, artificial intelligence – not in the 'big overlapping' AI sense but in detecting patterns in data and acting upon them.

Cognitive and the age of data and analytics

AI and cognitive aren’t just present in that innovation accelerator layer. It is, as said, also very present in the four pillars of the third platform that are driving and enabling digital transformationjust as they changed the ways we, businesses and consumers, behave, work and innovate.
We already mentioned Big Data in that context: ever more unstructured data. The solution: AI. Moreover, Big Data as such isn't the crux of the matter. We have known for years that it is Big Data analytics that matters most: turning data into outcomes – knowledge, actions, insights and so forth. That analytics part is so important that IDC has called the Big Data pillar the Big Data/Analytics pillar. What is needed for these analytics? Indeed, again, AI techniques. In fact, analytics is also, to a very high degree, what so-called cognitive systems are all about.

AI/cognitive and unstructured data/content

The picture is clear. But what does it all mean in practice? Let’s use data again.
In the end, the other pillars of the 3rd platform and the technologies driving digital transformation are a lot about data too: the cloud, mobility, social business and collaboration…
"Unstructured and semi-structured data is fueling a renaissance in the handling and analysis of information, resulting in a new generation of tools and capabilities that promise to offer intelligent assistance, advice, and recommendations to consumers and knowledge workers around the world" – IDC
We mentioned earlier how the data universe is exploding, with unstructured data growing much faster than other data. This is due to, among other things, mobile data traffic and the Internet of Things (see how it is all connected?).
This phenomenon isn’t new either and has been predicted since at least 2000. There are debates about the exact meaning of unstructured data and to what degree it is different from unstructured or semi-structured data. Simply said unstructured data is all the data you would get from IoT sensors, social media (again a link with one of the four pillars), text files and much more. Since several years it is estimated that 80 percent of data is unstructured and that percentage seems to grow as the volume of unstructured data keeps growing faster.
Various forms of data – unstructured data requires artificial intelligence to make business sense
The typical thing with unstructured data is that it doesn’t have a predefined data model as you have with data sitting in a relational database, for instance. Unstructured data and content as such has no meaning or context because in principle we don’t know what it is.
It comes in many shapes and forms and from several sources and is often text-intensive. From paper documents that need to get digitized to Twitter messages or email, also a major source of unstructured data/content. And it’s here that – again – we see various artificial intelligence techniques such as Intelligent Document Recognition or IDR, text mining, self-learning knowledge base technology, machine learning, natural language processing and the whole cognitive computing aspect come into the picture.
In fact, if you look at the page of IBM's famous Watson platform you'll read that, quote, "IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data". In an information management context, we find artificial intelligence in, among others, the mentioned IDR applications, self-learning systems for customer service information, information routing processes, predictive analytics and automated processes such as automated loan application classification.

The value of artificial intelligence – conclusion and next steps

Artificial intelligence is – and will be – critical for many technological and business evolutions. And, yes, it is one of many enablers of digital transformation.
Should we debate how far we go with it? Yes. But we really need to know what we are talking about. You can learn more about that in our article on the debates regarding AI, its dangers and the future of humanity, essentially revolving around superintelligence.
Expect further articles, including a dive into the past, present and future of AI/cognitive – and the various applications and "forms" of AI.
Because, as said, artificial intelligence is not one thing. Just looking at one context where AI and cognitive are used, Intelligent Document Recognition, there are several forms of artificial intelligence at work, such as semantic understanding, statistical clustering and classification algorithms such as SVM, Bayes and neural nets, as Roland Simonis explained in part three of a blog series for AIIM, reposted here, where he tackles how AI helps solve the information and Big Data challenge.
AI – Intelligent Document Recognition algorithms
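As a toy illustration of just one entry in that list, here is a hedged Scala sketch of a multinomial Naive Bayes text classifier of the kind used to route documents; the labels and training snippets are entirely made up.

// A toy multinomial Naive Bayes document classifier with made-up data.
object NaiveBayesSketch {
  def tokenize(doc: String): Seq[String] =
    doc.toLowerCase.split("\\W+").filter(_.nonEmpty).toSeq

  // Returns a scoring function: log P(label) + sum of log P(word | label).
  def train(docs: Seq[(String, String)]): (String, String) => Double = {
    val byLabel = docs.groupBy(_._1)
    val vocab = docs.flatMap(d => tokenize(d._2)).toSet
    val priors = byLabel.map { case (l, ds) => l -> math.log(ds.size.toDouble / docs.size) }
    val wordScore = byLabel.map { case (l, ds) =>
      val words = ds.flatMap(d => tokenize(d._2))
      val freq = words.groupBy(identity).map { case (w, ws) => w -> ws.size }
      // Laplace smoothing so unseen words don't zero out the probability.
      l -> ((w: String) => math.log((freq.getOrElse(w, 0) + 1.0) / (words.size + vocab.size)))
    }
    (label, doc) => priors(label) + tokenize(doc).map(wordScore(label)).sum
  }

  def main(args: Array[String]): Unit = {
    val training = Seq(
      "invoice"   -> "total amount due payment invoice number",
      "invoice"   -> "invoice payment due net thirty days",
      "complaint" -> "unhappy with the service please fix this",
      "complaint" -> "broken product want a refund now")
    val score = train(training)
    val doc = "payment due on this invoice"
    println("Routed to: " + Seq("invoice", "complaint").maxBy(l => score(l, doc)))
  }
}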
For now, let’s say it’s clear there is no harm in an algorithm enabling people to find something better (in fact, if you look at how poor search intelligence still is, we’d love to so far more intelligence in it) and there is no harm in having a system that helps you process and understand information faster and better to improve anything worth improving such as customer service (with a growing usage of IDR applications and Knowledge Base technology) and , cybersecurity or people’s health, to name just a few.
But artificial intelligence, as a “whole”, is not as far as we tend to believe.

The waves of artificial intelligence: from concept and research to business reality and ethical discussion

John McCarthy by WikiPedia user Geejo – CC-BY-SA-2.0
Although some look further into the past for the birth of AI, the 1950s was really when the first wave started. One of the founders of artificial intelligence as a concept was US computer and cognitive scientist Dr. John McCarthy. He is believed to have also coined the term, and he defined artificial intelligence as "the science and engineering of making intelligent machines". After a 1956 conference where McCarthy was present, the first wave really took off, mainly focusing on research. Besides McCarthy, many of these researchers became household names in those early research days and still are today. Among them: Marvin Minsky, Herbert Simon and Allen Newell, to name a few.
McCarthy’s ideas and those of his peers, as well as years of research and initial developments next led to the second wave of artificial intelligence in the 1980s, mainly due to the success of expert systems, among others strengthened by the rise of the PC and the client-server model.
A third wave took place at the end of the nineties and in the early 2000s, when there was more attention on specific applications of AI across diverse domains. On top of that, there was the success of the Internet, which also led to quite some hype and predictions that didn't really live up to their promises. The spreading availability and usage of the Internet did cause a stir. In those days we worked for a publishing house, and one of the company's major magazines, Inside Internet, ran several pieces from AI researchers at various universities where real-life applications were tried. Artificial intelligence was again debated a lot and became popular, also in globally launched magazines of those days such as Wired.
Unfortunately, the hype was big again too. It's also in those days that the convergence of man and machine became increasingly popular(ized). In 1999, for instance, we had the chance to interview Joël de Rosnay, who had published a book about a nascent global superorganism, the cybiont, which would know a "symbiotic mankind" in a connected ecosystem of humans, technology and everything really. It does sound familiar now, doesn't it? More about de Rosnay's views – and those of others – showing how increasing interconnectedness was seen as the big promise back then, in this article.
Today's artificial intelligence wave is one of rapid adoption of AI technologies in new applications, driven by, among others, the mentioned 3rd platform technologies, including the cloud, faster processing capabilities, scalability, Big Data, the push of various companies in a space where technologies continue to be refined across several applications and industries (self-driving cars, robotics, the Internet of Things, the rise of chatbots and more) and, last but not least, market demand for smart and intelligent technologies to leverage the potential of new technologies, information and digital transformation.
Whether this wave will lead to true and continuing business momentum, however, remains to be seen despite "good signs", as does the next wave, amid an increasing number of discussions on the "ethics", security and "place" of AI in the future. The "AI and robots taking over mankind" view and superintelligence evolutions – as opposed to AI as mimicking possibilities of the human brain for a purpose – are real concerns and deserve attention.
It's clear that artificial intelligence is indeed not new, but it has changed a lot and gains more attention than ever. It's becoming ubiquitous and transforms the way we work, live and do business. Along with robotics (and phenomena such as 3D printing, the Internet of Things, etc.), artificial intelligence is again an increasingly debated topic. Still, this wave is not the last one; it is even very similar in many regards to the previous one, and the hype is loud.

Intelligence and artificial intelligence – what’s in a name?

There are many definitions of artificial intelligence, just as there are many definitions of intelligence.
The Encyclopædia Britannica defines artificial intelligence as, quote, "the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience". It goes on to define what intelligence means, a phenomenon that has fascinated us since ancient times and is now, more than ever, discussed in this digital age where the pace at which AI gets used is growing fast. There are different technologies that get ranked as artificial intelligence and different types of AI. The article gives a good overview of the history of AI and touches upon topics such as reasoning, perception and problem solving. The growth of artificial intelligence is not linear but exponential, and it is therefore good to look at all the domains where it is and can be used, in an open debate.