When the term “artificial intelligence” – better known as “AI” – was coined, the assumption was that humans (carbon-based life forms) had “real” intelligence, while the best a machine’s intelligence could be, to the extent machines had any, was “artificial”. Working with companies that are leading major machine learning, algorithm, and AI initiatives, I’m convinced that we’re ushering in a new golden age of AI – but one that might need some terminology refinements. IBM, one of the leaders in this new golden age, talks about how cognitive computing will take us to the promised land where machines aren’t just augmenting our calculation skills but are recognizing patterns and sifting through data without being taught by humans, digitizing our experiences without our involvement, striking up conversations, driving our cars, and making decisions just like humans do.
With the rapid progress we’ve been making – much of which I’ll get to see first-hand when I attend #IBMWoW next week – I’ve been wondering whether we should move away from the term “artificial intelligence” toward “silicon intelligence”. We’re carbon-based, so our intelligence could be called carbon intelligence (CI) rather than “real intelligence”. Once a machine is intelligent and can do things equal to or exceeding our own abilities, should that intelligence really be considered “artificial”, or just silicon-based intelligence (SI)?
While this might seem like I’m being a little tongue-in-cheek, I think that as machine or silicon intelligence becomes just as real as our own – meaning indistinguishable from it – the statutory and regulatory structures, plus the social contracts, we have in place will need to be expanded to include SI. Our existing laws governing what humans can do with and to each other have been crafted and honed over thousands of years, and they account for carbon-based intelligence quite well.
Our belief with old-style AI has been that humans would program all variations, without cognition, and that any error a machine makes would have been programmed into its software or hardware by a human. This might sound like science fiction, but once machines become intelligent they can make errors on their own, without a human-created programming error as the cause.
Once our silicon machines start making intelligence-inspired errors in judgment, what will we do? As long as we think machine intelligence, or SI, is somehow less real or “artificial”, we’ll end up ignoring safety and reliability concerns because we won’t know whom to blame – the programmers or the machines themselves. I don’t think we have to worry about machines taking over and subjugating us anytime soon, but IBM’s vision of cognitive computing leads to types of AI (SI?) that we’ve never seen and have never had to regulate or control.
I look forward to seeing the AI experts next week at #IBMWoW – and asking them about carbon intelligence, silicon intelligence, what’s real, what’s artificial, and how cognitive computing will take us to the promised land. Follow me on Twitter (@shahidnshah) so we can learn together. I’ll be using the hashtag #IBMWoW. Stay tuned!