Modelling artificial intelligence on the human brain – Jeff Hawkins

by stronged

Hitchcock Lecture, UC Berkeley, October 3, 2012

Interesting points of this talk are:

  • Alan Turing – can computers do anything? The starting point for AI
  • AI (task-specific, programmed, limited learning, knowledge representation is difficult) – developed with no reference to neuroscience
  • Warren McCulloch and Walter Pitts – neurons as logic gates – we can think of neurons in the brain like logical processes in a computer
  • Artificial neural networks – limited capabilities (still reliant on programming)
  • The Human Brain Project – modelling a brain and its structures
  • The neocortex is a predictive modelling system – it builds a map from streaming data and is geared towards prediction, anomaly detection, and action (continuous learning and adaptation)
  • Machines must have senses that process information through time in order to catalogue/archive it and to interact
  • Artificial memory must be built hierarchically
  • Cannot separate inference from behaviour – sensory perception from action
  • How we interact with the world is a constant process of focusing on one thing and filtering another out – similar to flow theory
  • ‘Dense representations’ – the bit representations used in computers (1s and 0s)
  • ‘Sparse representations’ – selective representations of information (only the pieces that best represent it semantically); a small illustration of the dense/sparse contrast follows this list
  • Variable-order sequence memory – sequence prediction that uses as much preceding context as needed (a toy sequence-prediction/anomaly sketch follows this list)
  • GROK software predicts spatial and temporal sequences of information
  • The only dangerous AI would be self-replicating machines
  • Intelligence without emotions? Physiologically, emotions are not essential to intelligence; they act as a switch that makes your neocortex remember a particular thing – in short, Hawkins believes emotions are a system that prioritises what you should learn.
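
To make the dense/sparse contrast concrete, here is a minimal Python sketch (not from the talk, and not Numenta's actual encoding): a dense value whose individual bits carry no meaning of their own, versus sparse sets of active bits where shared bits act as a crude proxy for semantic similarity. The example concepts ("cat", "dog", "car") and the bit positions are invented for illustration.

```python
# Toy illustration of dense vs. sparse representations.
# All values below are made up; this is not Numenta's SDR implementation.

def overlap(a: set, b: set) -> int:
    """Number of shared active bits -- a crude proxy for semantic similarity."""
    return len(a & b)

# Dense representation: a compact bit pattern (e.g. an ASCII-like code) in which
# the individual bits mean nothing on their own.
dense_cat = 0b01100110
dense_dog = 0b01100111  # differs by one bit, but that tells us nothing semantic

# Sparse representation: a large bit space (say 2048 bits) with only a few bits
# active, where each active bit is meant to stand for some feature of the concept.
sdr_cat = {12, 97, 310, 511, 873, 1402, 1777, 1999}
sdr_dog = {12, 97, 310, 640, 873, 1500, 1777, 2003}   # shares several bits with cat
sdr_car = {5, 201, 333, 780, 1024, 1290, 1600, 1950}  # almost nothing in common

print(overlap(sdr_cat, sdr_dog))  # 5 -> semantically close
print(overlap(sdr_cat, sdr_car))  # 0 -> unrelated
```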
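And a toy sketch of the prediction/anomaly-detection idea behind the sequence-memory and Grok bullets. Hawkins' sequence memory is variable-order and built from sparse representations; this Python sketch is only first-order (one step of context) and works on plain symbols, with the class name, the stream, and the 'X' surprise all invented for illustration. It continuously learns which symbols follow which, predicts the next one, and flags inputs it did not predict.

```python
from collections import defaultdict

class ToySequenceMemory:
    """Minimal online first-order sequence learner (illustrative only):
    remembers which symbols have followed which, predicts the next symbol,
    and flags an anomaly when the observed symbol was not predicted."""

    def __init__(self):
        self.followers = defaultdict(set)  # symbol -> symbols seen after it
        self.prev = None

    def step(self, symbol):
        predicted = self.followers[self.prev] if self.prev is not None else set()
        anomaly = self.prev is not None and symbol not in predicted
        if self.prev is not None:
            self.followers[self.prev].add(symbol)  # continuous, online learning
        self.prev = symbol
        return anomaly

mem = ToySequenceMemory()
stream = list("ABCABCABCABXABC")  # repeating pattern with one surprise ('X')
for t, s in enumerate(stream):
    if mem.step(s):
        # Early steps are flagged while the pattern is still being learned;
        # afterwards only the surprise 'X' (and the step right after it) stands out.
        print(f"anomaly at t={t}: saw {s!r}")
```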