The ups and downs of artificial intelligence

“Machines will be capable, within 20 years, of doing any work a man can do,” proclaimed Nobel laureate Herbert A. Simon in 1965. More than 50 years later, machines outperform humans in many areas, but Simon’s vision is still far from being a reality.

Artificial intelligence is on everybody’s mind today. But rather than rising steadily in the collective consciousness alongside technological progress in the field, AI has experienced periods of extreme hype and periods of strong disillusionment since the term was first coined in 1956 at the Dartmouth Conference in the United States. In the very early stages of AI development, scientists and media reports made utopian claims about the potential of AI. Herbert A. Simon, Nobel laureate in economics, wrote in 1965: “Machines will be capable, within 20 years, of doing any work a man can do.” Marvin Minsky, another pioneer of early AI research, proclaimed in 1970: “In from three to eight years we will have a machine with the general intelligence of an average human being.”

“AI can only solve ‘toy’ versions of real-life issues” 
However, researchers were not able to deliver on the lofty promises associated with AI. In 1973, the British parliament commissioned a thorough investigation of the state of AI research. The resulting Lighthill report stated that AI had not achieved anything that could not also be achieved in other sciences. It concluded that most successful AI algorithms would grind to a halt on real-world problems and that they were only suitable for solving ‘toy’ versions of real-life issues. The UK subsequently scaled back government-funded AI research significantly. Around the same time, the US Department of Defense, one of the largest commissioners of AI research at the time, also decided to withdraw funding from the majority of AI-related research projects due to a lack of satisfactory progress.

AI research picks up again thanks to ‘expert systems’
During the period from 1974 to 1980, funding largely dried up and the field entered what is now referred to as the first ‘AI winter’. After 1980, however, the collective opinion concerning AI shifted again. So-called ‘expert systems’ were starting to pick up steam. In essence, these systems tried to mimic intelligence through a massive number of predefined if-x-then-y statements. Corporations began to see the potential of expert systems for business applications, and AI-related spending increased from a few million dollars in 1980 to over a billion dollars by 1985. However, the early success of expert systems was not destined to last. The systems proved expensive to maintain and incapable of scaling up to deal with very complex problems, since every possible point of decision needed to be pre-programmed for them to work.
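To make the idea concrete, the if-x-then-y approach can be sketched as a handful of hand-written rules plus a loop that applies them until nothing new follows. This is a minimal illustrative toy, not any historical system; the facts and rules below are invented for the example.

```python
# Toy sketch of an 'expert system': intelligence mimicked purely through
# predefined if-x-then-y rules. All facts and rules are illustrative.

RULES = [
    # (condition over known facts, conclusion added when the condition holds)
    (lambda f: "fever" in f and "cough" in f, "possible flu"),
    (lambda f: "possible flu" in f and "short of breath" in f, "see a doctor"),
]

def infer(facts):
    """Repeatedly apply the rules until no new conclusion appears."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in RULES:
            if condition(facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short of breath"}))
```

The sketch also shows why such systems failed to scale: every decision point must be written out by hand as another rule, so coverage of a genuinely complex domain requires an ever-growing, hand-maintained rule base.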

The personal computer causes the second ‘AI winter’
The rise of the personal computer (PC) from IBM and Apple coincided with the fall from grace of expert systems in 1987. The US Department of Defense once again cut funding to AI research, the corporate sector shifted its attention to the PC, and many providers of expert systems filed for bankruptcy. The second AI winter began and lasted until around 1993, at which point research very gradually started to pick up again. Pulitzer Prize winner John Markoff wrote that many AI researchers at certain points in time deliberately used different names, such as informatics, analytics or cognitive systems, to describe their work in an attempt to avoid the stigma of false promises associated with artificial intelligence.

IBM’s Deep Blue starts the comeback of AI
In the latter half of the 1990s, applications of AI started to show potential again. At their second confrontation in 1997, IBM’s Deep Blue managed to narrowly beat Garry Kasparov 3.5 to 2.5 in a six-game chess match. At the 2005 DARPA Grand Challenge, a 212km driverless car competition in the Nevada desert, the vehicles of four teams managed to cross the finish line in less than ten hours. Deep learning (DL) started entering the collective consciousness in 2011, after a DL-based convolutional neural network (CNN) model became the first software solution to achieve superhuman performance in a visual pattern recognition contest. In 2012, a similar CNN model won the ImageNet classification challenge by a significant margin over the non-deep-learning-based competitors. Four years later, Google’s AlphaGo defeated the world champion in the Chinese game of Go, and in January 2019, its AlphaStar AI challenged professional players of the competitive video game StarCraft II and won 10 out of 11 matches.

Today, AI is hailed as the next potential general purpose technology (GPT), like steam power and electricity before it. Proposed groundbreaking applications across a wide array of industries seem to lie not far in the future, from autonomous driving in the car industry to above-human-level accuracy in skin cancer diagnostics in health care. AI has truly arrived back in the collective zeitgeist.