Although the Deep Learning market is estimated to grow to $10.2 billion by the end of 2025, the technology has significant bottlenecks that need to be addressed.

Challenges in Deep Learning (DL) Technology:

1. Deep Learning Requires a Large Amount of Quality Data:

Deep Learning algorithms are trained to learn progressively from data. Large data sets are needed to ensure the machine delivers the desired results. Just as the human brain requires many experiences to learn and deduce information, the analogous artificial neural network needs a copious amount of data. DL works best when it has lots of quality data, and its performance increases as the availability of data grows. Likewise, a DL system fails badly when it is not fed enough quality data.
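
This dependence on data volume can be illustrated with a learning curve, which plots validation score against training-set size. Below is a minimal sketch using scikit-learn; the synthetic dataset and the small MLP are illustrative stand-ins, not a benchmark.

```python
# Minimal sketch: how accuracy tends to grow with more training data.
# Assumes scikit-learn; dataset and model choices are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import learning_curve
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

sizes, train_scores, val_scores = learning_curve(
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0),
    X, y,
    train_sizes=np.linspace(0.1, 1.0, 5),  # 10% .. 100% of the training data
    cv=3,
)

for n, score in zip(sizes, val_scores.mean(axis=1)):
    print(f"{n:5d} training examples -> validation accuracy {score:.3f}")
```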

2. Optimised Hyperparameters:

Hyperparameters are the parameters whose values are defined before the learning process begins. Changing the value of such a parameter even slightly can cause a significant shift in the performance of the model. Relying on default values and skipping hyperparameter optimisation can therefore have a tremendous impact on model performance.
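
As a concrete illustration, a simple grid search can compare a few candidate values rather than trusting the defaults. This is a minimal sketch using scikit-learn's GridSearchCV; the synthetic data and the parameter grid are illustrative assumptions.

```python
# Minimal sketch of hyperparameter optimisation via grid search.
# Assumes scikit-learn; the parameter grid is illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_grid = {
    "learning_rate_init": [1e-4, 1e-3, 1e-2],  # small changes, large effects
    "hidden_layer_sizes": [(32,), (64,), (64, 32)],
}

search = GridSearchCV(MLPClassifier(max_iter=300, random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
print("Best parameters:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```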

In 2017, researchers fooled Google’s DL system into making errors by adding “noise” to the input data. The errors they forced concerned seemingly trivial matters, such as an image-recognition algorithm mistaking rifles for a helicopter, which exemplifies how dependent DL is on the right quality and quantity of data to perform accurately. The Google researchers explained: “The algorithms, unlike humans, are susceptible to a specific type of problem called an ‘adversarial example.’ These are specially designed optical illusions that fool computers into doing things like mistake a picture of a panda for one of a gibbon. They can be images, sounds, or paragraphs of text. Think of them as hallucinations for algorithms.”

If such a small variation in the input can have such significant effects on results and predictions, there is a real need to ensure the accuracy and stability of DL systems. An adversarial example could thwart the AI system that controls a self-driving car, causing it to mistake a stop sign for a speed-limit sign. In some industries, such as industrial applications, insufficient data may therefore limit the adoption of Deep Learning.
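
To make the idea concrete, the sketch below constructs an adversarial input with the Fast Gradient Sign Method (FGSM), one well-known recipe (not necessarily the one the Google researchers used). The toy untrained model and random input are placeholders; the point is that a perturbation bounded by a small epsilon is derived from the gradient of the loss with respect to the input.

```python
# Minimal sketch of an adversarial example via the Fast Gradient Sign
# Method (FGSM). The tiny untrained model and random input are
# illustrative; on a real trained classifier the same recipe applies.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in for an image
y = torch.tensor([3])                             # its true label

# Compute the gradient of the loss with respect to the *input*.
loss = loss_fn(model(x), y)
loss.backward()

# Step the input in the direction that increases the loss.
epsilon = 0.1  # perturbation budget; small enough to look unchanged
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```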

3. High-Performance Hardware Requirement:

Training the data sets in a Deep Learning solution requires an enormous amount of computation. For the task to be performed and real-time problems to be solved, the machine needs to be equipped with sufficient processing power. To ensure high efficiency and lower time consumption, data scientists move to multi-core, high-performance GPUs and processing units, which are expensive and consume a lot of power.

Also, Deep Learning systems at the industry level require high-end data centres, whereas smart devices such as robots, drones, and mobiles need small, efficient processing units. Deploying DL solutions in the real world thus becomes a power-consuming and costly affair.
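
In practice, frameworks let a solution fall back to whatever hardware is available. A minimal PyTorch sketch, assuming an illustrative toy model, looks like this:

```python
# Minimal sketch: falling back gracefully when a high-end GPU is not
# available. The model and data are illustrative placeholders.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("training on:", device)

model = nn.Linear(128, 10).to(device)    # move parameters to the device
batch = torch.randn(64, 128).to(device)  # move data to the same device

output = model(batch)  # computation now runs on the GPU if one exists
print(output.shape)
```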

4. Lack of Multitasking and Flexibility:

Deep Learning models, once trained, can deliver a phenomenally accurate and efficient solution to a specific problem. However, in the present landscape, neural network architectures are highly specialised to particular application domains. Google DeepMind scientist Raia Hadsell has observed: “There is no neural network in the world, and no method right now that can be trained to identify objects and images, play Space Invaders, and listen to music.”

Most systems work this way and are incredibly good at solving one problem; solving even a similar problem requires reassessment and retraining. Research scientists are working hard to develop DL models that can multitask without the entire architecture needing to be reworked. Despite small advances in multitasking using Progressive Neural Networks, significant progress is yet to be made towards Multi-Task Learning (MTL). However, researchers from the Google Brain Team and the University of Toronto presented a paper on MultiModel, a neural network architecture that draws on the successes of vision, audio, and language networks to solve problems from multiple domains simultaneously, including speech recognition, image recognition, and language translation.
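
One common starting point for MTL is hard parameter sharing: a single trunk learns a shared representation, and small task-specific heads branch off it. The sketch below is a generic illustration of that pattern, not DeepMind's Progressive Neural Networks or Google's MultiModel; the layer sizes and task names are assumptions.

```python
# Minimal sketch of hard parameter sharing for Multi-Task Learning:
# one shared trunk, one lightweight head per task.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(   # shared representation
            nn.Linear(128, 64), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({  # task-specific outputs
            "classify": nn.Linear(64, 10),
            "regress": nn.Linear(64, 1),
        })

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

net = MultiTaskNet()
x = torch.randn(4, 128)
print(net(x, "classify").shape)  # torch.Size([4, 10])
print(net(x, "regress").shape)   # torch.Size([4, 1])
```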

5. Security Concerns for Deep Learning:

Deep Learning networks potentially have some exciting applications for empowering cybersecurity. However, when you step back to the systems themselves, and keep in mind how readily the outputs of these models change when the inputs are modified, these networks can be vulnerable to malicious attacks. For example, self-driving vehicles are partially powered by DL: if a perpetrator were able to access the DL model and alter its inputs, the vehicle’s behaviour could potentially be controlled maliciously. This highlights the black-box attacks that several DL networks face, which can result in misclassifications.
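
A minimal sketch of the black-box setting: the attacker has no access to weights or gradients and can only query the model, here by trying small random sign perturbations until the prediction flips. The toy model, input, and query budget are illustrative assumptions.

```python
# Minimal sketch of a black-box probe: without gradients or weights,
# an attacker queries the model with small random perturbations and
# keeps any that flip the prediction. Model and input are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 3))  # attacker sees only its outputs
x = torch.randn(1, 16)
original = model(x).argmax(dim=1).item()

epsilon = 0.25  # perturbation budget
for trial in range(200):
    x_try = x + epsilon * torch.randn_like(x).sign()
    if model(x_try).argmax(dim=1).item() != original:
        print(f"prediction flipped after {trial + 1} queries")
        break
else:
    print("no flip found within the query budget")
```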

6. Investments:

The funding or capital required for Deep Learning is enormous, whereas the ROI is long-term, depending on how successful the company is at building solutions that solve real-time problems for industry verticals. Google acquired the UK-based Artificial Intelligence start-up DeepMind reportedly for approximately $525 million; the company’s co-founder Demis Hassabis described its vision as to “Solve Intelligence, and using that to solve other problems in this world.” However, all of this research and development comes at a considerable price, and the company needs the next level of investment to continue its advanced research activities. Its management service fee covers the costs of real estate and the running and maintenance of its high-end computer systems and other infrastructure. Its most significant expenses were staff costs, payroll, office software and hardware, and stock-based compensation.

To summarise, Deep Learning, although a subset of Machine Learning, which is itself a subset of Artificial Intelligence, is not flawless. While exploring new and uncharted territories of Cognitive Technology, it is only natural to come across specific bottlenecks and difficulties, as is the case with any technological advancement. However, no challenge is insurmountable. Data scientists and developers are working tirelessly to refine and enhance the underlying models in DL. I am sure that in the next five to ten years DL is going to make a significant breakthrough and impact in the world of Artificial Intelligence, disrupting every industry and the way individuals live, work, and do things.