21 - Big Data and Machine Learning (w/ Divyanshi Srivastava!)

More and more fields collect as much data as possible in order to draw the most principled and specific conclusions they can. Still, raw data alone do not yield conclusions; they have to be analyzed. How do humans use computational resources to analyze data? What are big data and machine learning? Let’s learn to be scientifically conversational.

 

General Learning Concepts

1)    What are machine learning, big data, and AI?

a.    Machine Learning: “is the ability of machines to learn through data, observations and interacting with the world. That acquired knowledge allows computers to correctly generalize to new settings.” – Yoshua Bengio

In other words, machine learning refers to mathematical models that can find patterns in observed data, but also generalize well to unobserved data.

b.    The need for “big” data: The more data a mathematical model is exposed to, the more likely it is to pick up generalizable patterns rather than sample-specific noise. For example, imagine that you develop a model to predict the outcome of a soccer match based on two variables: player ratings and whether the game was home or away. If the model is only “exposed” to three matches, each of which was won by the home team, the model will assign a disproportionately high weight to playing at home, and it will fail to generalize. On the other hand, exposure to a large number of games will average out sample-specific noise! And the more complex the model becomes, the more data you’ll need! [2]
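The soccer example above can be sketched in a few lines of code. Everything here is invented for illustration: the 55% true home-win rate, the simulated matches, and the naive frequency estimator standing in for a real model.

```python
import random

random.seed(0)

def home_win_rate(games):
    """Estimate P(home team wins) from a list of observed match outcomes."""
    wins = sum(1 for g in games if g == "home_win")
    return wins / len(games)

# Assume the true underlying rate is 55% (an invented figure).
TRUE_RATE = 0.55

def simulate(n):
    """Generate n match outcomes from the true underlying rate."""
    return ["home_win" if random.random() < TRUE_RATE else "away_win"
            for _ in range(n)]

tiny = ["home_win", "home_win", "home_win"]  # three matches, all home wins
big = simulate(10_000)                       # a "big data" sample

print(home_win_rate(tiny))  # 1.0: the model overweights playing at home
print(home_win_rate(big))   # close to the true 0.55
```

With only three matches the estimate is badly biased by sample-specific noise; with ten thousand it converges toward the true rate, which is the whole argument for big data in one picture.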

i.    Collection: Instead of collecting only an average (mean, median, or mode) of some quantity, data collectors now often store many individual measurements. For example, rather than summarizing a survey taken on Facebook, the organization may collect all relevant pieces of personal data from your profile and store them. Another example: Amazon must log every single sale it makes.

ii.    Analysis: An article by David Bullock at North Dakota State University compares big data to corn: corn sitting in a silo has not been made into a product and will waste away. Likewise, big data does not necessarily mean anything until it is processed into trends.

c.     Artificial Intelligence: AI is not necessarily the hyper-intelligent robots of “2001: A Space Odyssey.” Alan Turing (1912–1954) is generally credited with originating the concept, with the simple definition of a “thinking machine.” That definition has since been expanded to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention.” [2]

i.    Examples of AI in everyday technology: 1. Siri and Alexa are pseudo-intelligent digital personal assistants that recognize voices and use machine learning to get better at predicting questions and requests. 2. Nest, a thermostat company acquired by Google, makes devices that learn from your past temperature settings and adjust predictively.

d.    Teaching machines to learn: John McCarthy (1927–2011) was instrumental in coining the term artificial intelligence, and he is perhaps best known for leading teams that pursued algorithms to let computers teach themselves tasks. This strategy relies on statistics, which demands ever more storage and processing power; as those have become cheaper, machine learning has become more manageable each year.

i.    Example from Chris Meserole, fellow in Foreign Policy at the Brookings Institution: “The core insight of machine learning is that much of what we recognize as intelligence hinges on probability rather than reason or logic. If you think about it long enough, this makes sense. When we look at a picture of someone, our brains unconsciously estimate how likely it is that we have seen their face before. When we drive to the store, we estimate which route is most likely to get us there the fastest. When we play a board game, we estimate which move is most likely to lead to victory. Recognizing someone, planning a trip, plotting a strategy—each of these tasks demonstrate intelligence. But rather than hinging primarily on our ability to reason abstractly or think grand thoughts, they depend first and foremost on our ability to accurately assess how likely something is. We just don’t always realize that that’s what we’re doing.”

2)    Supervised vs. unsupervised learning:

a.    Supervised: The goal of supervised learning is to find relationships or structure in the input data that let us produce correct output data. An external “teacher” provides a label or category for each piece of input data. Essentially, labeled “training data” are used to build a model that is then applied to the actual data that will be processed down the road. [2] [3]

i.    For example: You have a basket of fruit and train the computer to recognize red, indented fruits as apples and yellow-green oblongs as bananas. You then show the computer a banana, and it recognizes it as such.
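The fruit example can be sketched as a toy supervised learner. The two features (redness and elongation, each on a 0-to-1 scale) and the training points are invented for illustration, and a simple nearest-centroid rule stands in for a real classifier.

```python
# Toy supervised learning: a nearest-centroid classifier over fruit features.

def centroid(points):
    """Average position of a list of (redness, elongation) tuples."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def train(labeled):
    """labeled: dict mapping class name -> list of feature tuples."""
    return {name: centroid(pts) for name, pts in labeled.items()}

def predict(model, point):
    """Assign the class whose centroid is closest to the point."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(model, key=lambda name: dist2(model[name], point))

# The "external teacher": every training example comes with a label.
training = {
    "apple":  [(0.9, 0.2), (0.8, 0.3), (0.95, 0.25)],  # red, round
    "banana": [(0.2, 0.9), (0.1, 0.8), (0.15, 0.85)],  # yellow-green, oblong
}
model = train(training)
print(predict(model, (0.1, 0.9)))  # banana
```

The key supervised ingredient is that every training example carries a label supplied from outside; the model only has to learn the mapping from features to labels.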

b.    Unsupervised: The goal of this type of learning is to discover the inherent structure of the data without using explicitly provided labels. The computer forms its own clusters of data, grouping items by similarity rather than by any previously assigned categories. [2] [3]

i.    For example: You provide the computer pictures of cats and dogs. The computer might categorize them according to their similarities, patterns, and differences. You have not provided any training data or examples.

3)    Common examples of big data sets today:

a.    Social media: Social networks collect personal information on users and create personalized environments for those users. This also allows for targeted advertisements for users.

b.    ‘Brick-and-mortar’ and ‘click-and-order’ sales: Walmart noticed significant sales of strawberry Pop-Tarts before hurricanes; that data trend led the chain to move those same Pop-Tarts to the checkout aisles ahead of incoming storms. Amazon uses customer data to spot where trends are heading so that it can stock adequate supplies of items before they ever become popular.

c.     Crime: Legal teams and police forces can aggregate crime types, dates, times, and locations from past incidents to predictively map where police should be positioned in a city.
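The aggregation step behind that kind of predictive mapping can be sketched in a few lines. The incident log, neighborhood names, and hours below are all invented for illustration; real systems use far richer features and models.

```python
from collections import Counter

# Hypothetical incident log: (neighborhood, hour of day, crime type).
incidents = [
    ("Downtown", 23, "theft"), ("Downtown", 22, "assault"),
    ("Downtown", 23, "theft"), ("Riverside", 14, "burglary"),
    ("Riverside", 15, "burglary"), ("Hillcrest", 9, "vandalism"),
]

# Count past incidents by (place, hour) to rank candidate patrol hotspots.
hotspots = Counter((place, hour) for place, hour, _ in incidents)
print(hotspots.most_common(1))  # [(('Downtown', 23), 2)]
```

The counts simply rank where and when incidents have clustered in the past, which is the raw material a predictive-policing model would build on.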

4)    Fun Tidbits

a.    How does an AI learn to play a game? For in-game AI, humans often are not interested in pushing the limits of the machine, because we would rather have a fun and engaging opponent. In 1996, IBM’s Deep Blue became the first computer to defeat a reigning world champion (Garry Kasparov) in a game of chess, and it went on to win a full match against him in 1997. The AI behind Deep Blue used a brute-force method that analyzed millions of move sequences before making a move. Still, chess is not considered an especially complicated game by today’s standards, which led to interest in having an AI play the ancient Chinese game Go. In 2016, researchers at the Google-owned AI company DeepMind created AlphaGo, a Go-playing AI that beat the world champion Lee Sedol 4 to 1 in a five-game competition. AlphaGo replaced the brute-force method of Deep Blue with deep learning, an AI technique loosely inspired by the way the human brain works. Instead of examining every possible combination, AlphaGo studied the way humans played Go, then tried to figure out and replicate successful gameplay patterns.

Even so, this environment is not as challenging as it could be: the AI is playing a turn-based game without a constantly changing environment. [2]
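The brute-force game-tree idea behind Deep Blue can be sketched at miniature scale with the game of Nim: players alternately take 1 to 3 stones, and whoever takes the last stone wins. Exhaustively searching every line of play tells the machine whether a position is winning and which move to make. (Nim and this tiny solver are illustrative stand-ins, of course; Deep Blue's actual search was vastly larger and chess-specific.)

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones):
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player took the last one
    # Brute force: try every legal move; a position is winning if some
    # move leaves the opponent in a losing position.
    return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1  # no winning move exists; take one stone and hope

print(wins(4))       # False: multiples of 4 are losing positions
print(best_move(7))  # 3, leaving the opponent at the losing position 4
```

The same minimax-style search, scaled up with clever pruning and fast hardware, is what let Deep Blue evaluate millions of chess continuations per move.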

5)    Solicited Naïve Questions

a.    Is AI going to take over the world? A study by the McKinsey Global Institute estimates that between 400 million and 800 million jobs worldwide could be displaced by automation by 2030. However, experts argue that while some job replacement will happen, innovation typically opens new jobs at the same time. Still, this is an area of great debate today. [2]

 
Calvin Yeager