Background
Deep learning systems now achieve human-comparable accuracy in label prediction across almost all domains. Artificial intelligence techniques are applied in a wide variety of applications, from malware detection (Kumar, 2020), object recognition (Bayraktar, 2019), image classification (Ahuja, 2020; Rajagopal, 2020), speech recognition (Llombart, 2021), natural language processing (Do, 2021), medical science (Esteva, 2017), and satellite applications (Kumar, 2020) to facial recognition systems (Menon, 2021). With the growing adoption of deep neural networks (DNNs) by many companies, DNNs are increasingly used in safety-critical applications, including drones, robotics, voice recognition, self-driving cars from companies such as Uber, Apple, Samsung, and Tesla (Lex, 2019), surveillance systems (Pillai, 2021), Apple Siri ("Apple," 2019), and Amazon Alexa (2019).
However, deep neural networks are prone to adversarial attacks (Szegedy, 2014). With their rapid adoption, the security of deep neural networks has become an essential consideration across industries. This study presents an empirical survey of different approaches to generating adversarial examples in the computer vision field. According to Krizhevsky et al. (2012), deep learning is a point of convergence in visual perception. Neural networks learn from large amounts of data, much as humans learn from experience: a deep neural network repeats a task many times, adjusting its parameters slightly at each step to reduce the loss and improve the outcome. Figure 1 shows a timeline of deep neural networks from 1940 to 2018.
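To make the idea of adversarial example generation concrete, the following is a minimal sketch of one classic gradient-based method, the fast gradient sign method (FGSM) of Goodfellow et al. (2015), which perturbs an input along the sign of the loss gradient with respect to that input. This is an illustrative sketch assuming PyTorch; the toy model, the eps value, and the input shapes are placeholders chosen for demonstration, not details taken from this survey.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, eps=0.03):
    """FGSM sketch: take one step of size eps along the sign of the
    input gradient of the classification loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    # Keep pixel values in the valid [0, 1] range.
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage with a hypothetical linear classifier on 32x32 RGB images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)   # a random stand-in "image"
y = torch.tensor([3])          # its assumed true label
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())  # perturbation is bounded by eps
```

The perturbation is imperceptibly small for typical eps values, yet such inputs can flip the prediction of a trained classifier, which is the core phenomenon the surveyed attack methods exploit.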