Kanye West and Ben Horowitz
(Source: Flickr / jeremycastillo)
In last week’s post, I discussed the history, future, and applications of machine learning. This week, I would like to delve deeper into machine learning and explore its underlying processes, as well as the different types of ML algorithms and their applications in industry.
Supervised learning is the first type of learning algorithm I’ll cover. A data set consisting of a feature vector (input) and a supervisory signal (output), which is essentially either another value or a label, is fed into the learning algorithm. The supervised learning algorithm then analyzes this data, known as the training data, and produces an inferred function, which can be used to map new examples. The features can range from the square footage of homes in a particular location to the age, weight, or height of individual patients. In the case of the homes, the supervisory signal might be the sale price, and based on the data supplied, the learning algorithm will be able to predict the sale price of a home in that location when given its square footage. When the supervisory signal is a continuous value, the learning algorithm can fit a least squares regression line (informally known as the best-fit line) to the training data and use it to predict values for new examples. If, on the other hand, the supervisory signal is a discrete label, the learning algorithm uses classification, for instance to distinguish a malignant tumor from a benign one using the size of the tumor as the input to the algorithm.
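To make the regression/classification distinction concrete, here is a minimal sketch in Python (scikit-learn is my own choice of library, and the numbers are invented purely for illustration): a least squares fit that predicts sale prices from square footage, and a classifier that labels tumors as malignant or benign based on their size.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# --- Regression: predict sale price (output) from square footage (input) ---
# Hypothetical training data: square footage -> sale price in dollars
sqft = np.array([[850], [1200], [1500], [2100], [2600]])
price = np.array([180_000, 245_000, 300_000, 410_000, 495_000])

regressor = LinearRegression()          # least squares "best-fit line"
regressor.fit(sqft, price)
print(regressor.predict([[1800]]))      # predicted price for an unseen 1800 sq ft home

# --- Classification: label a tumor as malignant (1) or benign (0) from its size ---
# Hypothetical training data: tumor size in cm -> label
size = np.array([[0.5], [1.0], [1.4], [2.5], [3.1], [4.0]])
label = np.array([0, 0, 0, 1, 1, 1])

classifier = LogisticRegression()
classifier.fit(size, label)
print(classifier.predict([[2.0]]))      # predicted class for an unseen 2.0 cm tumor
```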
Unsupervised learning, on the other hand, serves to find structure in unlabeled data. The training data supplied to the learning algorithm is unlabeled, and thus, there is no error or reward signal to evaluate a potential solution. Given the training data, the algorithm will utilize unsupervised learning techniques such as clustering or blind signal separation to find intrinsic structure in the data. Once the unsupervised learning algorithm is able to find structure in the data, it can use what it has learned about the relationship between the variables to classify future data into groups.
The clustering technique is particularly applicable to computer vision within robotics. Clustering refers to grouping objects according to their properties, so that the resulting groups can be compared with one another and structure in the data identified. In computer vision, an unsupervised learning algorithm may use clustering to group the pixels of an image into low-level features, which are then used as inputs to other ML algorithms. By doing so, the learning algorithm can identify the subject of a set of images and categorize them accordingly.
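As a rough sketch of how clustering might group pixels into low-level features, the snippet below runs k-means over the pixel colors of a synthetic image (scikit-learn and the random “image” are my own assumptions; a real computer vision pipeline would be considerably more involved):

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic "image": 100x100 pixels with 3 RGB channels, values in [0, 1]
rng = np.random.default_rng(0)
image = rng.random((100, 100, 3))

# Flatten to a list of pixels and cluster them by color into 4 groups
pixels = image.reshape(-1, 3)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

# Each pixel now belongs to a cluster; these assignments can serve as
# crude low-level features for a downstream learning algorithm
segments = kmeans.labels_.reshape(100, 100)
print(segments.shape, np.unique(segments))
```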
The other noteworthy unsupervised machine learning technique is known as blind signal separation. One of the most common methods of blind signal separation is independent component analysis (ICA). As the name suggests, ICA serves to isolate the individual components of source signals from a set of mixed signals. The classical example of a source separation problem is known as the cocktail party problem, in which a number of conversations are taking place in a crowded setting. An ICA algorithm can separate the conversations so that one can hear what each individual is saying without interference from the others. One famous implementation is a single line of MATLAB code developed by Sam Roweis at the University of Toronto; it is important to note, however, that researchers spent a number of years arriving at that one line of code.
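I won’t try to reproduce that one-line MATLAB snippet here. As a hedged stand-in, below is a short Python sketch of the same idea using scikit-learn’s FastICA, unmixing two synthetic “conversations” from two mixed recordings (the signals and the mixing matrix are invented for illustration and are not the Roweis code):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Two synthetic source "conversations" (a sine wave and a square wave)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]

# Each "microphone" hears a different mixture of the two sources
mixing = np.array([[1.0, 0.5],
                   [0.5, 1.0]])
recordings = sources @ mixing.T

# ICA recovers statistically independent components from the mixtures
ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(recordings)
print(recovered.shape)  # (2000, 2): the two separated signals (up to scale/order)
```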
Unsupervised learning algorithms tend to be more widely applicable than supervised learning algorithms because, in reality, researchers rarely have conveniently labeled data. Unsupervised learning can serve to analyze data in virtually any field of study, ranging from astronomy to biomedicine and ecology, and it has industrial applications as well, in marketing, telecommunications, and fraud detection. It can help analyze social networks and recognize clusters of friends within a larger group of individuals, conduct sequence analysis in genetics, aid neuroimaging by clustering brain tissue, or recognize market segments and help companies devise more targeted advertising campaigns.
Another central concept in machine learning is reinforcement learning. Derived from psychologist B.F. Skinner’s theory of operant conditioning, in which the consequence of a particular action changes the likelihood of that action being repeated, reinforcement learning uses those very principles to teach a system to perform a certain task. Much as we train our pets to “sit” or “stay” on command with the help of treats (reinforcement), researchers supply the reinforcement learning algorithm with examples specifying which behavior is desirable and which is not, and reinforce the system’s behavior with the help of reinforcement signals. Since the system interacts with its surroundings in discrete time steps, it relies on a scalar immediate reward associated with its last transition to reinforce the agent’s behavior. At each moment in time, the reinforcement learning agent gathers an observation about its environment, pursues a particular course of action, and, once the environment changes in response, receives a reinforcement signal. Below is a video produced by Andrew Ng’s A.I. Lab at Stanford University that uses reinforcement learning and apprenticeship learning to fly an autonomous helicopter. The stunts performed in the video are virtually impossible, not to mention incredibly laborious, to program without the help of machine learning.
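The helicopter work is far beyond anything a few lines of code can convey, but to ground the observation/action/reward loop described above, here is a minimal tabular Q-learning sketch on an invented five-state corridor, where the agent is rewarded only for reaching the rightmost state (the environment, rewards, and hyperparameters are all assumptions made for illustration):

```python
import numpy as np

# Hypothetical five-state corridor: the agent starts at state 0 and is
# rewarded only when it reaches state 4. Actions: 0 = move left, 1 = move right.
n_states, n_actions = 5, 2
q_table = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.3   # learning rate, discount, exploration
rng = np.random.default_rng(0)

def step(state, action):
    """Apply an action and return (next_state, reward)."""
    next_state = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Observe the current state, then act (explore occasionally)
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(q_table[state]))
        next_state, reward = step(state, action)
        # The reinforcement signal updates the value of the last transition
        q_table[state, action] += alpha * (
            reward + gamma * np.max(q_table[next_state]) - q_table[state, action]
        )
        state = next_state

print(np.argmax(q_table[:4], axis=1))  # learned policy for non-terminal states: move right
```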
Whether it’s with the help of supervised, unsupervised, or even reinforcement learning algorithms, it’s clear that much of the progress towards strong A.I. is the result of advancements in machine learning. The most remarkable aspect of this progress is its potential. Data is gradually becoming the new science, and machine learning holds the answers to tomorrow’s most pressing questions.
In 1952, Arthur Samuel, while working at IBM, developed a computer program designed to play checkers against itself. Samuel had the program, now known as the Samuel Checkers-Playing Program, play thousands of games of checkers against itself, and over time it learned to recognize board positions and patterns that increased the likelihood of winning and others that increased the likelihood of losing. Soon enough, the program was able to play checkers better than Arthur Samuel himself.
With the Samuel Checkers-Playing Program, Arthur Samuel developed the very first self-learning computer program. Prior to Samuel’s work, the general presumption in computer science was that a computer could not perform any task without being explicitly programmed. Samuel effectively disproved this assumption and coined the term “machine learning,” which he defined as “a field of study that gives computers the ability to learn without being explicitly programmed.” Tom Mitchell, in 1998, offered a more formal, scientific definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In other words, a computer program is said to be self-learning if it can predict a specific outcome with increasing accuracy over time when provided with a data set.
Despite early breakthroughs in machine learning, its applications were limited until relatively recently. The recent popularity of machine learning in the AI community is partially due to society’s skepticism and pessimism about the future of AI. After nearly 50 years of debate over the plausibility of intelligent systems, the general public lost faith in the potential of artificial intelligence. As a result, when researchers were able to achieve some success in machine learning, it quickly became a central focus of AI. Unlike much of the rest of AI, which relies primarily on heuristics and logic rather than data analysis, machine learning deals with concrete, algorithmic problems with clear measures of progress. The problems in much of AI tend to be ill-defined and ambiguous, whereas ML tends to deal with individual, well-defined problems that can be solved with algorithms, often better than humans can solve them. Another fundamental difference is that researchers in AI dedicate their time to programming computers to perform mechanical functions, whereas researchers in ML dedicate their time to programming computers to teach themselves. The former is incredibly laborious and requires repeated iteration on the part of the programmer due to the innumerable variables at play in reality that aren’t always considered in the lab. The latter, on the other hand, is just as laborious, if not more so, but the process of iteration can now be performed by the program itself without any assistance from a programmer. As a result of these factors, machine learning has attracted a great deal of attention in the AI community and has become the central focus of much of the AI research being done at the moment.
Before we delve further into ML, it’s important to explore the philosophical and theoretical underpinnings of machine learning. Machine learning is based on the theory that human intelligence stems from a single algorithm. It can be summarized as the process of providing a system with significant amounts of data so the system can learn from that data by exploring relationships and recognizing patterns between different variables. The actual learning is done by complex learning algorithms, which are modeled on the neural networks of the human brain. These artificial neural networks gather information about the given data sets, extrapolate from the data to predict future occurrences, and make logical decisions. The learning algorithms are designed to emulate the human brain by incorporating principles such as hierarchical organization and sparse coding. As a result, self-learning systems learn in a manner very similar to humans. Our primary method of learning is trial and error, a continuous process of successive approximations: we formulate a theory about a particular task, adjust that theory based on the result of the first approximation, and iterate until we achieve an acceptable result. ML algorithms work in a similar manner. As the system gathers more data and performs more simulations over time, it is able to refine its theory and predict the relationship between variables with increasing accuracy.
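As a loose analogy for this loop of theory, approximation, and adjustment, the sketch below fits a one-parameter “theory” (a slope) to invented data by gradient descent, nudging its estimate a little after every pass over the data (all values are made up for illustration):

```python
import numpy as np

# Invented data generated by a "true" relationship y = 3x plus noise
rng = np.random.default_rng(1)
x = rng.random(200)
y = 3.0 * x + rng.normal(scale=0.1, size=200)

# Start with a crude theory (slope = 0) and refine it by successive approximation
slope, learning_rate = 0.0, 0.1
for iteration in range(1000):
    predictions = slope * x
    error = predictions - y
    gradient = 2 * np.mean(error * x)   # direction in which the error grows
    slope -= learning_rate * gradient   # adjust the theory against the error

print(round(slope, 2))  # settles near 3.0 as the approximations improve
```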
Machine learning is not without its faults, however. Modifying raw data so it can be fed into a computational model or algorithm is very time-consuming. Additionally, at this stage, learning algorithms are not yet capable of artificial general intelligence (human-level intelligence), meaning computer systems cannot learn with anywhere near the breadth humans are capable of. Self-learning systems are, for now, limited to a single task, and thus limited to learning about one thing and one thing only. For instance, the learning algorithm that helps Google order its users’ search results cannot predict future stock market prices or recognize patterns in the housing market.
Despite these criticisms, ML remains the most promising branch of AI, and more than 50 years after Arthur Samuel developed the first self-learning program, the potential applications of machine learning continue to grow at an exponential rate. Machine learning has become so pervasive that we often use it without even being aware of it. The spam filters in our email accounts use ML algorithms to differentiate spam from legitimate messages. Google’s search engine relies on ML algorithms to rank the results of your search according to keywords and your previous search history. The voice recognition systems in our iPhones and Androids depend on machine learning to decode the commands of the user. Facebook’s automatic face recognition system is another application of machine learning. Machine learning has even helped us gain a significantly better understanding of the human genome and made self-driving cars possible. Its applications range from DNA sequencing to enterprise software and stock market analysis. However, the most revolutionary applications of machine learning have yet to come. The possibilities are endless once machines are capable of artificial general intelligence and machine learning enters the era of strong AI. Machine learning has the potential to create a radically different future, more so than any other field of study.
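As a toy illustration of the spam-filtering example above, the sketch below trains a naive Bayes classifier on a handful of invented messages (the phrases, labels, and choice of scikit-learn pipeline are my own assumptions; production spam filters are vastly more sophisticated):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training messages with spam (1) / not-spam (0) labels
messages = [
    "win a free prize now", "limited offer claim your reward",
    "lunch at noon tomorrow?", "meeting notes attached",
    "free money click here", "can you review my draft",
]
labels = [1, 1, 0, 0, 1, 0]

# Bag-of-words features fed into a naive Bayes classifier
spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
spam_filter.fit(messages, labels)

print(spam_filter.predict(["claim your free prize", "see you at the meeting"]))
```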
In the 1950s and 1960s, most people had an optimistic view of the future, particularly with respect to technology. With shows and movies like “The Jetsons” and “Star Trek,” many Americans believed space travel would become commonplace in the near future. A number of ambitious ideas captured the public’s imagination, including the idea of robots and other intelligent machines. At the time, artificial intelligence was still a relatively new but nonetheless heavily explored field of study. There was a general consensus among the public that AI was the way of the future. The Department of Defense poured millions of dollars into AI research, and by the mid-1960s a number of AI labs had been established across the United States and the rest of the world. Much of the research conducted in these labs focused exclusively on programming computers. The general public believed the pace of progress in AI would accelerate over time, and that as a result, man would gradually become more machine-like. They fantasized about a future in which, rather than spending countless hours trying to learn a piece of information, data could simply be downloaded into one’s brain. Instead, 50 years later, the trajectory of AI has evolved to make machines more human-like. Rather than focusing exclusively on computer science as researchers did in the 50s and 60s, the field of AI now incorporates aspects of neuroscience in hopes of building machines and software that model the cognitive processes of the human brain.
The emergence of new fields such as cognitive science, as well as theoretical and computational neuroscience, is evidence of this shift within AI towards developing more human-like systems. Prior to the 1950s, neuroscience and computer science remained entirely separate disciplines. However, AI researchers soon recognized the value of using neuroscience in conjunction with computer science to build AI systems. By developing mathematical models and algorithms that reflect the cognitive processes of the brain, neuroscientists would not only have a theoretical framework with which to better understand the brain, but could also replicate these models in AI machines and software.
AI startups are doing much the same, focusing their efforts on understanding the principles of the human brain in hopes of incorporating those very elements into intelligent machines. By employing principles such as hierarchical organization and sparse coding in AI systems, rather than programming a computer to perform certain mechanical functions, startups such as Vicarious and Prior Knowledge are adopting the human brain as a model for AI systems and developing increasingly human-like systems as a result. This shift towards building more human-like machines indicates a larger trend towards sentient AI, a type of artificial intelligence in which systems are conscious and self-aware. The increasing push to develop technologies equipped with multimodal intelligence and capable of replicating human emotion is a byproduct of this shift towards sentient machines. The implicit notion behind sentient AI, however, is that human intelligence is the pinnacle of artificial intelligence, which is a false assumption.
The shift towards sentient AI raises a number of ethical dilemmas, namely whether artificial intelligence can exceed human intelligence and what happens if it does. First and foremost, the argument that human intelligence is the pinnacle of artificial intelligence is false. Unlike human intelligence, AI is not limited to the gradual, incremental improvements of evolution, and since evolution operates on a generational time horizon, AI can improve at a considerably faster rate than human intelligence. Given this potential magnitude of advantage over human intelligence, the concern about a Skynet-type scenario is all the more valid.
Whether the trend towards sentient artificial intelligence is for better or worse is debatable. While utilizing the principles of neuroscience may help to develop more intelligent machines than ever before, it signals a lack of faith in artificial intelligence on the part of society. It reflects a sense of pessimism towards the future of AI, suggesting human intelligence is as far as we can go. As a result, AI has become the contrarian option in comparison to biotechnology, presenting unprecedented opportunity to advance the human race and create a radically different future.
People have almost always had reservations about new technologies and their place in social interactions. Too many people, however, are downplaying Google Glass’ potential on the grounds that it makes social situations awkward.
I agree that it can be flustering to see the person you’re having a conversation with dart her eyes between you and the corner of her glasses. However, complaining about how Glass ruins the integrity of your interactions puts you in the same class as the 19th-century citizens who complained about the devilishness of the telephone. Their criticisms were remarkably similar: too little privacy, too inappropriate, too suspicious.
The critics seem to have forgotten that Glass is still at the beginning of its evolution as a product. Naturally, there will be social mishaps and misgivings. Some people might even be offended by the mere utterance of “O.K. Glass” in their presence. However, over time, that indignation will fade. New norms will emerge, and ultimately, if Glass is as utilitarian as its backers expect it to be, we will adapt, tolerate, and eventually accept it as a fixture in our lives.
EDIT: Moments after I published this post, Gary Shteyngart’s piece in the New Yorker discussing his East Coast travels with Glass appeared in my feed. Highly recommended.
It’s often said that neuroscience is “data-rich, but theory-poor.” Francis Crick, famous for his part in discovering the structure of DNA, went even further, stating that neuroscience lacked a theoretical framework entirely. In the words of Thomas Kuhn, neuroscience remains in its pre-paradigm stage. Despite the tremendous progress in the study of cognition, the question has to be asked: “Why do we not have a good brain theory yet?”
While every neuroscientist will attest to the lack of a theoretical framework in neuroscience, the reasons they give vary. Some argue we still need more research and more data in order to develop an adequate brain theory. This reasoning suggests there is still not quite enough information about the anatomy and physiology of the brain to propose a broad theoretical framework. However, given the vast amount of data about the brain already available, it’s unlikely that continued data collection alone will suddenly produce neuroscience’s first paradigm. It’s improbable that another 20 years of data collection will result in a theoretical framework that interprets and integrates a myriad of observational and experimental findings into a comprehensive theory when the past 20 years of research have failed to suggest any such theory.
Jeff Hawkins, founder of the Redwood Neuroscience Institute, suggests there is another reason why we have yet to develop a theoretical framework to understand the existing data. Hawkins claims an “intuitive, strongly-held, but incorrect assumption” has blinded neuroscientists in their efforts to develop a working theory of the brain: the assumption that intelligence is defined by behavior. Instead, Hawkins suggests intelligence is defined by prediction. As he points out, IQ testing relies heavily on prediction, testing one’s ability to predict the next number in a sequence and to recognize and recall relationships. This misunderstanding of how intelligence should be judged, Hawkins argues, has kept neuroscientists and researchers from proposing such a framework.
Now that we have a sense of why we don’t yet have a good brain theory, we still have to answer the question: “Why is having a good brain theory necessary?” First and foremost, a strong theoretical framework is necessary to interpret much of the still unexplained data in neuroscience, much as Copernicus’ heliocentric theory did for our understanding of the solar system and Darwin’s theory of natural selection did for our understanding of evolution. It is important to note that Copernicus’ and Darwin’s theories were not the first paradigms in their respective fields. Both the study of the solar system and the study of evolution experienced numerous paradigm shifts, and it was only after a series of successive iterations that we arrived at the truth about each. Neuroscience, by contrast, has yet to have its first paradigm. Interestingly enough, the frameworks Copernicus and Darwin proposed were rooted in mathematical models and empirical evidence, methods neuroscience has only recently embraced.
Perhaps even more important than helping to interpret existing data, science can sometimes tell us something about ourselves. In this case, it can tell us about who we are. It can tell us about how we perceive our surroundings and about how we learn. It can offer an explanation as to why we think and behave the way we do. And every so often science contributes to the advancement of the human race. It helps society move forward and further the range of possibilities for humanity. Artificial intelligence, the primary application of the research in theoretical and computational neuroscience, is capable of such scale and impact. The use of intelligent machines and supercomputers has the potential to not only transform our lives but broaden our intellectual horizons and help create a more intelligent tomorrow.
It was September 11th, 1956, the second day of the Symposium on Information Theory at the Massachusetts Institute of Technology. Allen Newell and Herbert Simon, both pioneers in the field of computer science, along with Noam Chomsky and George Miller, leaders in their respective fields of linguistics and psychology, were all present at the seminar. As Miller noted after the symposium, he was convinced “experimental psychology, theoretical linguistics, and the computer simulation of cognitive processes were all pieces from a larger whole.” And thus the intellectual roots of what later became known as “cognitive science” were planted.
The study of cognitive processes was nothing new, however. The brain, in the days of Plato and Aristotle, was studied by philosophers rather than neuroscientists. The emergence of experimental psychology, beginning with Wilhelm Wundt in 19th-century Germany, forced the study of cognitive processes to rely less on theory and more on empirical evidence. Significant advancements in brain imaging technology continued this trend throughout the 20th century and into the 21st. Rather than focusing solely on cognitive processes as cognitive psychologists had been doing, neuroimaging techniques such as MRI, CT, and EEG allowed neuroscientists to directly study the physiology and anatomy of the human brain. And over time, as the computational power of computers grew, neuroscientists were able to use computer simulation to pose hypotheses to test later in their experiments. The overlap between neuroscience and computer science has helped researchers delve further into concepts such as neural modeling, brain theory, and neural networks.
At the intersection of these fields stands cognitive science. Cognitive science, however, is not simply a summation of these individual disciplines. Each field brings its own perspective and serves a unique purpose within cognitive science, each helping to develop a clearer understanding of the human brain. For instance, while both rely on a combination of observational and experimental research as well as a degree of theory, the neuroscience component of cognitive science focuses on the physiology of the brain whereas cognitive psychology focuses on cognitive operations. The philosophy and computer science components serve to provide a theoretical framework for the work done in the lab by psychologists and neuroscientists. Linguists and anthropologists operate primarily on the theoretical end of the spectrum, identifying grammatical structures in languages and examining the effect of cultural setting on cognition.
Unlike other disciplines that similarly study cognition, cognitive science’s use of computation and computer simulation makes it especially applicable to artificial intelligence and machine learning. Not only has the introduction of computer science to the study of cognition provided a theoretical framework for understanding the brain, it also helps scientists compare and contrast human and artificial intelligence. This, in turn, helps in developing machines that are more genuinely intelligent rather than merely artificial.
Though a relatively young discipline, cognitive science has witnessed tremendous progress and has quickly developed its institutional profile, leading to the creation of cognitive science departments and the funding of research labs at universities across the country. Despite this progress, cognitive science remains an under-explored field of study with a myriad of unsolved mysteries and discoveries ahead that will undoubtedly lead to a smarter, more intelligent tomorrow.