August 18, 2014

Worked from the woods today. #monday

Filed under: monday 
July 17, 2014

Take the Plunge by (eb78)

(Source: r2--d2, via theclassyissue)

June 22, 2014
XOXO #weekender

Filed under: weekender 
June 8, 2014
From the last adventure. #symmetry

Filed under: symmetry 
June 4, 2014
Easier to get work done when it’s just you and the pilot. #suicidesontheprivatejet

February 11, 2014
Youngins

(Source: acityofpearls)

January 29, 2014
A year of irreversible decisions begins. #googleglass (at Google San Francisco)

Filed under: googleglass 
January 13, 2014

December 21, 2013
Waterfall, by M.C. Escher (1961)

Filed under: art escher 
December 18, 2013
Notes on the respiratory system. #davinci #art (at Kensington Library)

Filed under: art davinci 
August 25, 2013

(Source: GQ)

Filed under: muhammad ali style 
August 16, 2013
Kanye West and Ben Horowitz

(Source: Flickr / jeremycastillo)

August 13, 2013
The Algorithms behind ML

In last week’s post, I discussed the history, future, and applications of machine learning. This week, I would like to delve deeper into machine learning and explore the processes of ML, as well as the different types of ML algorithms and their applications in industry.

Supervised learning is the first type of learning algorithm I’ll cover. The learning algorithm is fed a data set consisting of feature vectors (inputs) paired with supervisory signals (outputs), which are either continuous values or discrete labels. The supervised learning algorithm then analyzes this data, known as the training data, and produces an inferred function that can be used to map new examples. The feature vectors might be the square footage and location of homes, or the age, weight, and height of individual patients. In the case of the homes, the supervisory signal would be the sale price, and from the data supplied, the learning algorithm can predict the sale price of a home in a particular location given its square footage. When the supervisory signal is a continuous value, the learning algorithm can fit a least-squares regression line (informally known as the best-fit line) to the training data and use it to make predictions. When the supervisory signal is a discrete label, the algorithm instead performs classification, for example distinguishing a malignant tumor from a benign one using the size of the tumor as the input.

[Figures: Regression (left) and Classification (right)]

Unsupervised learning, on the other hand, serves to find structure in unlabeled data. The training data supplied to the learning algorithm is unlabeled, and thus, there is no error or reward signal to evaluate a potential solution. Given the training data, the algorithm will utilize unsupervised learning techniques such as clustering or blind signal separation to find intrinsic structure in the data. Once the unsupervised learning algorithm is able to find structure in the data, it can use what it has learned about the relationship between the variables to classify future data into groups. 

The clustering technique is particularly applicable to computer vision within robotics. Clustering refers to grouping objects according to their properties, so that the resulting clusters can be compared and structure in the data revealed. In computer vision, an unsupervised learning algorithm may use clustering to group the pixels of an image into low-level features, which are then used as inputs to other ML algorithms. By doing so, the learning algorithm can identify the subject of a set of images and categorize them accordingly.
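As a rough illustration of that idea (a hypothetical example, with a random array standing in for a real image), the sketch below uses k-means from scikit-learn to group pixel colors into a few clusters, the kind of low-level grouping described above.

    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in for an RGB image: 100x100 pixels with random colors.
    image = np.random.rand(100, 100, 3)
    pixels = image.reshape(-1, 3)        # one row per pixel: R, G, B values

    # Group the pixels into 4 color clusters; no labels are provided.
    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pixels)

    # Each pixel now has a cluster assignment; the cluster centers (or the
    # assignments themselves) can serve as coarse, low-level features.
    segmented = kmeans.cluster_centers_[kmeans.labels_].reshape(image.shape)
    print(kmeans.cluster_centers_)       # the 4 representative colors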


The other noteworthy unsupervised machine learning technique is blind signal separation. One of the most common methods of blind signal separation is independent component analysis (ICA). As the name suggests, ICA serves to isolate the individual components of source signals from a set of mixed signals. The classical source separation problem is known as the cocktail party problem, in which a number of conversations are being held in a crowded setting. An ICA algorithm can separate the conversations taking place so one can hear what each individual is saying without interference from the others. Below is the ICA algorithm developed by Sam Roweis at the University of Toronto, written in MATLAB. It is worth noting, however, that researchers spent a number of years developing that single line of code.


[Image: the one-line MATLAB ICA implementation]
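Since the original image of that one-liner has not survived in this copy, here is a rough Python substitute (my own sketch, not Roweis’ code) using scikit-learn’s FastICA to unmix two synthetic signals; it illustrates the same cocktail-party idea, though not in a single line.

    import numpy as np
    from sklearn.decomposition import FastICA

    # Two independent "speakers": a sine wave and a square wave.
    t = np.linspace(0, 8, 2000)
    sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]

    # Mix them, roughly as two microphones in the same room would hear them.
    mixing = np.array([[1.0, 0.5],
                       [0.5, 1.0]])
    observed = sources @ mixing.T

    # ICA recovers the original sources, up to scaling and ordering.
    ica = FastICA(n_components=2, random_state=0)
    recovered = ica.fit_transform(observed)
    print(recovered.shape)  # (2000, 2): one column per separated "conversation"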

Unsupervised learning algorithms tend to be more broadly applicable than supervised learning algorithms because researchers rarely have conveniently labeled data in practice. These algorithms can serve to analyze data in virtually any field of study, ranging from astronomy to biomedicine and ecology. They have applications in industry as well, from marketing and telecommunications to fraud detection. Unsupervised learning can help analyze social networks and recognize clusters of friends within a larger group of individuals, or conduct sequence analysis in genetics. It can aid neuroimaging by clustering certain brain tissue, or recognize market segments and help companies devise more targeted advertising campaigns.

Another central concept in machine learning is reinforcement learning. Derived from psychologist B.F. Skinner’s theory of operant conditioning, in which the consequence of a particular action changes the likelihood of that action being repeated, reinforcement learning uses those very principles to program a system to perform a certain task. Much as we train our pets to “sit” or “stay” on command with the help of treats (reinforcement), researchers supply the reinforcement learning algorithm with samples specifying which behavior is desirable and which is not, and reinforce the system’s behavior with the help of reinforcement signals. Because the system interacts with its surroundings in discrete time steps, it relies on a scalar immediate reward associated with its last transition to reinforce its behavior. At each step, the reinforcement learning agent gathers an observation about its environment and pursues a particular course of action; when the environment changes as a result, the reinforcement signal is determined. Below is a video produced by Andrew Ng’s A.I. Lab at Stanford University using reinforcement learning and apprenticeship learning to fly an autonomous helicopter. The stunts performed in the video would be virtually impossible, not to mention incredibly laborious, to program without the help of machine learning.
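The video itself cannot be embedded here, but to make the observation-action-reward loop concrete, below is a minimal tabular Q-learning sketch in Python (a toy of my own, not the Stanford helicopter code) on a five-cell corridor where the agent is rewarded only for reaching the rightmost cell; it explores by acting randomly and learns from the rewards which action each cell should prefer.

    import numpy as np

    n_states, n_actions = 5, 2           # corridor of 5 cells; actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))  # table of estimated action values
    alpha, gamma = 0.1, 0.9              # learning rate and discount factor

    for episode in range(2000):
        state = 0
        while state != n_states - 1:
            action = np.random.randint(n_actions)   # explore by acting randomly
            next_state = max(0, state - 1) if action == 0 else state + 1
            reward = 1.0 if next_state == n_states - 1 else 0.0   # scalar reinforcement signal

            # Q-learning update: nudge the value estimate toward the observed
            # reward plus the discounted value of the best next action.
            Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
            state = next_state

    print(np.argmax(Q, axis=1))  # learned policy: every non-terminal cell prefers action 1 (right)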

Whether it’s with the help of supervised, unsupervised, or even reinforcement learning algorithms, it’s clear that much of the progress toward strong A.I. is the result of advances in machine learning. The most remarkable aspect of that progress is its potential. Data is gradually becoming central to science, and machine learning holds the answers to tomorrow’s most pressing questions.

August 5, 2013
Machine Learning: What and Why?

In 1952, Arthur Samuel, while working at IBM, developed a computer program designed to play checkers against itself. Samuel had the program, now known as the Samuel Checkers-Playing Program, play thousands of games of checkers against itself, and over time it learned to recognize board positions and patterns that increased the likelihood of winning, and others that increased the likelihood of losing. Soon enough, the program was able to play checkers better than Arthur Samuel himself.

With the Samuel Checkers-Playing Program, Arthur Samuel developed the very first self-learning computer program. Prior to Samuel’s work, the general presumption in computer science was that a computer could not perform any task without being explicitly programmed to do so. Samuel effectively disproved this assumption and coined the term “machine learning,” which he defined as “a field of study that gives computers the ability to learn without being explicitly programmed.” In 1998, Tom Mitchell offered a more formal, scientific definition: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E.” In other words, a computer program is said to be self-learning if, given a data set, it can predict a specific outcome with increasing accuracy over time.

Despite early breakthroughs in machine learning, its applications were limited until relatively recently. The recent popularity of machine learning in the AI community is partially due to society’s skepticism and pessimism regarding the future of AI. After nearly 50 years of debate on the plausibility of intelligent systems, the general public lost faith in the potential of artificial intelligence. As a result, when researchers were able to achieve some success in machine learning, it immediately became the central focus of AI. Unlike the rest of AI, which relies primarily on heuristics and logic rather than data analysis, machine learning deals with concrete, algorithmic problems with clear measures of progress. The problems in much of AI tend to be ill-defined and ambiguous, whereas ML tends to deal with individual, well-defined problems that can be solved with algorithms, often better than humans can solve them. Another fundamental difference is that researchers in much of AI dedicate their time to programming computers to perform mechanical functions, whereas researchers in ML dedicate their time to programming computers to teach themselves. The former is incredibly laborious and requires repeated iteration on the part of the programmer because of the innumerable real-world variables that aren’t always considered in the lab. The latter, on the other hand, is just as laborious, if not more so, but the process of iteration can now be performed by the program itself without any assistance from the programmer. As a result of these factors, machine learning has attracted a great deal of attention in the AI community and has become the central focus of much current AI research.

Before we delve further into ML, it’s important to explore the philosophical and theoretical underpinnings of machine learning. Machine learning is based on the theory that human intelligence stems from a single algorithm. It can be summarized as the process of providing a system with significant amounts of data so that the system can learn from that data by exploring relationships and recognizing patterns among the variables. The actual learning is done by complex learning algorithms modeled on the neural networks of the human brain. These artificial neural networks gather information about the given data sets, extrapolate from the data to predict future occurrences, and make logical decisions. The learning algorithms are designed to emulate the brain by incorporating principles such as hierarchical organization and sparse coding. As a result, self-learning systems learn in a manner very similar to humans. Our primary method of learning is trial and error, a continuous process of successive approximation: we formulate a theory about a particular task, adjust that theory based on the result of the first attempt, and iterate until we achieve an acceptable result. ML algorithms work in a similar manner. As the system gathers more data and performs more simulations over time, it refines its theory and predicts the relationships between variables with increasing accuracy.
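As a loose illustration of that successive-approximation loop (a toy sketch, not a claim about how the brain works), the snippet below starts with a rough guess for the slope of a line and nudges it after each pass over the data, so its predictions improve with experience in the sense of Mitchell’s definition.

    import numpy as np

    # Toy data generated from y = 3x plus noise; the "theory" is a single slope w.
    x = np.linspace(0, 1, 50)
    y = 3 * x + 0.1 * np.random.randn(50)

    w, lr = 0.0, 0.5                  # initial guess and learning rate
    for step in range(200):
        error = w * x - y             # how wrong the current theory is
        w -= lr * np.mean(error * x)  # adjust the theory to reduce the error

    print(w)  # settles near 3, the slope that best explains the data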

Machine learning is not without its faults, however. Modifying raw data so it can be translated into a computational model or algorithm is very time-consuming. Additionally, at this stage, learning algorithms are not yet capable of artificial general intelligence (human-level intelligence), meaning computer systems cannot learn with anywhere near the breadth humans are capable of. Self-learning systems (for now) are limited to a single task, and thus can learn about one thing and one thing only. For instance, the learning algorithm that helps Google order its users’ search results cannot predict future stock market prices or recognize patterns in the housing market.

Despite these criticisms, ML remains the most promising branch of AI, and more than 50 years after Arthur Samuel developed the first self-learning program, the potential applications of machine learning continue to grow rapidly. Machine learning has become so pervasive today that we often use it without even being aware of it. The spam filters in our email accounts use ML algorithms to separate spam from legitimate messages. Google’s search engine relies on ML algorithms to rank search results according to keywords and your previous search history. The voice recognition systems in our iPhones and Androids depend on machine learning to decode the user’s commands. Facebook’s automatic face recognition is another application of machine learning. Machine learning has even helped us gain a significantly better understanding of the human genome and made self-driving cars possible. Its applications range from DNA sequencing to enterprise software and stock market analysis. Yet the most revolutionary applications of machine learning are still to come. The possibilities are endless once machines are capable of artificial general intelligence and machine learning enters the era of strong AI. Machine learning has the potential to create a radically different future, more so than any other field of study.

Amaan

August 4, 2013
#weekender (at Lands End)
