Future Projects

The Intelligent Virtual Assistant will result from the convergence of three technologies: Avatars, Thinking Machines, and Virtual Humans. Significant work is being done in all three areas.

  • Thinking Machines – Really exciting work is being done at Numenta, the brainchild of Jeff Hawkins, where they are developing smart software that mimics the human brain. In the article “Jeff Hawkins and the Human Brain” (Business 2.0, January/February 2007), he states: “What Numenta is doing is more fundamentally important to society than the personal computer and the rise of the Internet.” This is technology worth following. For further information, I would recommend the book On Intelligence, by Jeff Hawkins with Sandra Blakeslee.
  • Virtual Humans – So what is a Virtual Human? There may be two answers to that: one is what they are today, and the other is what they will become. Peter Plantec wrote the excellent book Virtual Humans, which gives us good insight into both. The following are a few key points from his book.
  • “In my estimation, no force today will impact the future of mankind as much as thinking machines and virtual humans.” (ref. p. 3)
  • “Virtual humans are animated characters that emulate human behavior and communication. They’re not artificially intelligent, but instead they use Natural Language Processing (NLP) to fake real intelligence better than the best Artificial Intelligence (AI) programs. Better than that, they can emulate a range of human behaviors nicely. Academically they’re known as embodied conversational agents or ECAs.” (ref. p. 3)
  • “It’s inevitable that V-people will become the universal interface.” (ref. p. 3)
  • “In simplest terms a virtual human is an intelligent computer simulation of human personality.” (ref. p. 4)
  • “They’re too much about clever technology and not enough about human personality.” (ref. p. 4)
  • “After all, consciousness is the illusion we want to project.” (ref. p. 4)
  • “Virtual Humans are great communicators. They will talk among themselves at light speed. It’s likely that they could form a hive mind in which all the knowledge of a billion virtual humans forms a gestalt that no single human could achieve. We could benefit from this greatly or we could be victimized by it.” (ref. p. 12-13)
  • “Virtual humans are our best shot at a truly universal man/technology interface.” (ref. p. 13)
  • “Ultimately we have to be as comfortable with them as we are with other humans. That’s when the bridge will be most functional, opening up the world of technology to nearly everyone.” (ref. p. 97)
  • “Eventually, Virtual Humans will be our gateway to everything.” (ref. p. 199)

Today’s Virtual Humans (talking heads) have their benefits and their drawbacks. The technology is fragmented and requires a good bit of technical know-how and time to assemble. However, today’s Avatars may transition into tomorrow’s Virtual Humans: those that can act independently on our behalf.
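Plantec’s point above, that today’s virtual humans fake intelligence with Natural Language Processing rather than true AI, can be illustrated with a minimal ELIZA-style sketch. The rules and reflection table here are invented for illustration; a real conversational agent would use far larger rule sets or statistical models:

```python
import re

# Illustrative pattern-matching "conversation" rules. Each rule pairs a
# regular expression with a response template; {0} is filled with the
# captured (and pronoun-reflected) fragment of the user's utterance.
RULES = [
    (re.compile(r"\bi need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

# First/second-person swaps so echoed fragments read naturally.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(text: str) -> str:
    """Swap pronouns word by word ('my job' -> 'your job')."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance: str) -> str:
    """Return a canned response for the first matching rule."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."
```

No understanding is involved; the agent merely echoes fragments of the input back inside templates, which is exactly the kind of “faked” intelligence the quote describes.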

The objective is an Intelligent Virtual Assistant that can always stay in touch with its real-life companion, whether over a cell phone, the Internet, or an ordinary telephone. The market for this would be tremendous; it would be like having a friend to talk to at any time. So what would it take to accomplish that? The following is a list of some of the requirements:

  • Self-Evolving
  • Self-Redefining
  • Self-Replicating
  • Emulate Fuzzy Human Process & Consciousness Behavior
  • Neural-Network Face Recognition
  • Real-Time Flowing Hair & Smooth Movements
  • Sophisticated Head Language
  • Set of Animated Hands
  • Track Where the User is Looking to Ensure Eye Contact
  • Face Tracking with Eye & Face Recognition
  • Develop Technology that will Interpret Experience and Automatically Integrate it into the V-person’s Personality
  • Hardware-Assisted Voice Interactive Technology
  • Laughter
  • Self-Learning
  • Full-Body Character
  • Sense & Communicate with People
Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-Share Alike 2.5 License.