Where is the AI Harbinger of Doom?

In 2013, Theranos, a health technology company, hit the news with a revolutionary product called the Edison that had the potential to change the medical industry for good. Leading the company was a Stanford chemical engineering dropout, Elizabeth Holmes. What better fairytale could you have? Holmes fit perfectly into the Western media's obsession with diversity over talent. For most of them, she was the ultimate role model. By 2014, investors across the valley valued the firm at $9 billion. But what was the actual value of its product? Nada.

Silicon Valley may have given the world some of its most significant technological inventions, but it is also a place known for overvaluing almost everything. Theranos was just one instance where it all crashed in front of our eyes. The "fake it until you make it" motto has produced cases where companies were valued solely on the eccentricity of their people rather than their product. Hyped to a pedestal by the Western media, Deep Learning is now seen as the next big thing. AI and ML programs at reputed universities are among the hardest to get into today. But is all the excitement reasonable? Or is this just another of the valley's shams waiting to burst into flames?

Fig-1: Google Trends for the term "deep learning"

While Deep Learning is, in all likelihood, not headed for a disaster as grand as Theranos, the question remains: how effective will it be in our day-to-day lives?

Today, everything from self-driving cars to Siri, Alexa and even the diagnostic healthcare industry implements Deep Learning to obtain better results. Tasks like identifying obstacles on the road and detecting cancerous cells in the human body have become a cakewalk for deep learning systems. That being said, a massive amount of data must be fed to these systems to attain an acceptable level of accuracy. Even with transfer learning, a humongous amount of data and calibration is required just to tackle another problem with similar parameters. Contrary to pop culture, there is no master deep neural net that can solve everything, yet. Such a concept does, however, exist in theory: it is termed Artificial General Intelligence (AGI).
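
To make the transfer-learning point concrete, here is a minimal sketch in PyTorch (the library choice, model and class count are illustrative, not from the article): the pretrained backbone is frozen and only the final layer is retrained, yet the new task still demands a sizeable labelled dataset of its own.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pretrained on ImageNet (illustrative choice).
model = models.resnet18(pretrained=True)

# Freeze every pretrained weight: we reuse the learned features as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier head for a new task with, say, 5 classes.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new head is trained; even so, many labelled examples per
# class are typically needed to reach acceptable accuracy.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```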

Fig-2: Gartner Hype Cycle for AI

Reinforcement learning was one of the closest attempts at building an AGI until the latest development in the field: OpenAI's GPT-3 (Generative Pre-trained Transformer 3), which has taken the world of AI by storm. To an average person, the model might look like autocomplete software akin to the one in their Gmail application. From the perspective of an engineer interested in AI, however, it is touted as the first concrete step towards building an AGI. To gauge the breadth of its dataset, consider that the whole of Wikipedia makes up only 0.6% of its training data. OpenAI has released a commercial API that demonstrates the model's capability. In the span of a few months, people have used it to create fiction, music and chatbots, and they have presumably just scratched the surface. According to MIT AI researcher Lex Fridman, if we keep up our current rate of technological advancement, by 2032 we could train a model as expansive as the human brain for the same cost as training GPT-3.
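
For a sense of what the API looks like in practice, here is a minimal sketch using OpenAI's Python client as it existed during the GPT-3 beta (the engine name, prompt and parameter values are illustrative; access required an invite-only API key):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # beta access key from OpenAI

# Ask the model to continue a prompt; "davinci" was the largest
# GPT-3 engine exposed through the beta API.
response = openai.Completion.create(
    engine="davinci",
    prompt="Once upon a time, in a lab in Silicon Valley,",
    max_tokens=50,
    temperature=0.7,
)

print(response.choices[0].text)
```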

In reinforcement learning, the designer of the model awards penalties and rewards for various decisions; beyond that, there are no restraints on the model's behaviour. This is easier said than done: crafting a reward function requires extreme precision and clarity. The setup is game-like, with heavy use of game theory in making decisions while interacting with the environment.
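
A toy example of how much rides on reward design: a minimal tabular Q-learning loop (the environment and reward values are invented for illustration). Nudge the per-step penalty and the learned behaviour can flip entirely.

```python
import random

# A toy 1-D world: states 0..4, the goal is state 4.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left or right

def reward(next_state):
    # The designer's choices here dictate behaviour: too small a step
    # penalty and the agent wanders; too large and detours look fatal.
    return 10.0 if next_state == GOAL else -1.0

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        # Standard Q-learning update.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward(nxt) + gamma * best_next
                                       - q[(state, action)])
        state = nxt
```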

Fig-3: Garry Kasparov's chess game against Deep Blue

Reinforcement learning models have beaten humans at certain games governed by rules. They have also been deployed in a limited capacity in self-driving cars to help chart paths for trips. Garry Kasparov, arguably the greatest chess player of all time, has an interesting take on RL being used for games like chess and Go. In 1997, he lost a chess match to IBM's supercomputer Deep Blue, an event marred by controversy that became a significant moment in the history of AI. According to him, these games are close-ended systems, i.e. they have a specific aim, and a program created to win them becomes redundant if the tactical knowledge it gains cannot be transferred to open-ended systems. This reveals a severe limitation: these programs achieve the required level of proficiency only by playing countless games, full of wild-goose moves that make no sense to a human. Humans approach the problem of learning a game differently: they first learn the rules, gain some experience and attain a baseline level of play. An AI program reaches that same baseline only with great difficulty, owing to its lack of tactical knowledge.

Moving on to commercial applications, deep learning has been put to use for a great many things. Some of the most successful deployments have been targeted advertising and recommendation systems. Popular OTT services like Netflix and Prime Video have become masters of harnessing customer data. With the amount of watch history they hold, deep learning has enabled them to keep users hooked to the screen. Although binge-watching can be attributed to our own shortfalls, it still feels good to put some of the blame on these services. Google earns 85.5% of its revenue from its ad services, which use various Information Retrieval and Machine Learning models. Not surprisingly, it can outperform a human in predicting a user's wants.
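
As a rough illustration of how watch history can be mined for recommendations, here is a minimal matrix-factorization sketch (the ratings matrix and dimensions are invented for the example; production systems at Netflix scale are far more elaborate):

```python
import numpy as np

# Toy user-by-title ratings matrix; 0 means "not watched yet".
ratings = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

n_users, n_items = ratings.shape
k = 2  # number of latent taste factors (illustrative)
rng = np.random.default_rng(0)
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

lr, reg = 0.01, 0.05
for _ in range(2000):
    for u in range(n_users):
        for i in range(n_items):
            if ratings[u, i] == 0:
                continue  # only fit observed ratings
            err = ratings[u, i] - U[u] @ V[i]
            u_row = U[u].copy()
            U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * u_row - reg * V[i])

# Predicted scores for unseen titles drive the recommendations.
print(np.round(U @ V.T, 1))
```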

However, companies like Google, Facebook (and Theranos) are guilty of promising innovative, out-of-the-box products that ultimately do not work. Many of these products are made attractive by using 'deep learning' as a trap phrase, and in the end it is deep learning that faces the backlash for these companies' over-reaching ambitions. Another reason for the hype around deep learning is the element of mystery associated with it, the so-called "black-box problem".

Fig-4: Graphical representation of a neural network

The information spewed out by the hidden layers is barely comprehensible, not least because architectural decisions are typically made with little justification before the experiments are ever run.
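
One way to see the black-box problem first-hand is to pull out a hidden layer's activations and try to read them; a minimal sketch (the network and layer widths are arbitrary, which is exactly the kind of unjustified design choice the criticism points at):

```python
import torch
import torch.nn as nn

# A tiny fully connected network with arbitrarily chosen widths.
model = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),
    nn.Linear(32, 16), nn.ReLU(),
    nn.Linear(16, 2),
)

activations = {}

def capture(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on the hidden layers to inspect what they emit.
model[1].register_forward_hook(capture("hidden1"))
model[3].register_forward_hook(capture("hidden2"))

model(torch.randn(1, 10))

# A wall of floats with no human-readable meaning attached.
print(activations["hidden1"])
```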

While it can be argued that Deep Learning algorithms might not do well in open-environment problems with a significant element of uncertainty, the same cannot be said of controlled settings. A well-designed deep learning system is potentially invincible in a controlled setting, with accuracies close to 100%. Such settings can be found on factory supply lines where labourers are assigned specific tasks. A bot can not only replace this section of the workforce; it can do the same task in a similar amount of time with greater efficiency, purely because there is no element of surprise or imagination involved in the process. Deep Learning techniques have the power to reshape the manufacturing industry in particular. Surprisingly, competition in this sector is rather scarce: barring LandingAI, no major tech startup has ventured actively into this domain.
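
In such a controlled setting, the core of a visual inspection system can be remarkably small; a sketch of a binary defect classifier (the architecture, input size and label convention are invented for illustration):

```python
import torch
import torch.nn as nn

# Minimal binary defect classifier for fixed-position, fixed-lighting
# part images, the kind of controlled input an assembly line provides.
class DefectNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # assumes 64x64 input

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One grayscale 64x64 frame from a (hypothetical) inspection camera.
logits = DefectNet()(torch.randn(1, 1, 64, 64))
print(logits.argmax(dim=1))  # 0 = ok, 1 = defective (assumed convention)
```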

Fig-5: McKinsey’s analysis of the potential of AI in different sectors

All this does not mean the recent big wave of AI has been for nought. The advances are here to stay: the steady grind of AI automation eliminating tedious tasks and positions, continued advances in robotics, increasing partial autonomy in cars, and better interfaces and query results, search included.

For startups, there will be less hype and buzz, meaning it will be tougher to get funded on magical buzzwords such as "AI" alone. That may sound like bad news, but it really isn't: it will push more committed, serious efforts to create real products as opposed to opportunistic quick flip-jobs.

It may also look as though startups will struggle to compete for lack of data and ever-growing compute resources. Huge models such as BERT and GPT-2 have hundreds of millions to billions of parameters and cost hundreds of thousands of dollars just to train. But those are brute-force approaches with clear limitations, and the supposed difficulty for startups to compete is largely an illusion. For instance, BERT does not scale: it relies on snippets produced by traditional search and requires significant resources and completely different architectures, as Google themselves noted in their recent BERT search update.

Deep Learning might not quite be the pipe dream that Theranos was, but as it stands today the technology faces tremendous challenges before it can achieve all that it promises. Odds are that, in the foreseeable future, AI won't be able to replace the large chunk of human jobs that involve some amount of imagination. As of today, these systems cannot work in an uncontrolled, open environment with the efficiency of humans, something that should make you sigh in relief. Oh, and what about the doomsday AI? Another pop culture fantasy, by any stretch of the (current) imagination.

References

  1. https://sibesh.com/essays/the-robots-must-be-crazy/
  2. https://www.kdnuggets.com/2018/02/current-hype-cycle-artificial-intelligence.html
  3. https://medium.com/@manjotpahwa/is-machine-learning-still-a-hype-261a7280f9b7
  4. https://www.google.co.in/amp/s/www.zdnet.com/google-amp/article/garry-kasparov-is-surprisingly-upbeat-about-our-future-ai-overlords/
  5. https://www.google.co.in/amp/s/www.businessinsider.com/theranos-founder-ceo-elizabeth-holmes-life-story-bio-2018-4%3famp
  6. https://www.theverge.com/21346343/gpt-3-explainer-openai-examples-errors-agi-potential
  7. https://www.youtube.com/watch?v=kpiY_LemaTc
