What is AI, and how did we get here?

May 12, 2023
 

Artificial Intelligence (AI) is extremely difficult to define. There is also intense debate over how “intelligent” AI really is, a debate that is evolving as rapidly as the technology itself.

My French colleague Jocelyn Jovene has written extensively on the subject, and much of this article draws on his work (see sources at the end of the article).

What is it?

"Artificial intelligence is the field devoted to building artificial animals and, for many, artificial persons (or at least artificial creatures that – in suitable contexts – appear to be persons)". - Stanford Encyclopedia of Philosophy

"AI is a programmed ability to process information". - US Defense Advanced Research Projects Agency (DARPA)

"It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable". - John McCarthy in this 2004 paper.

How we got here

In a 1950 article published in the journal Mind, Alan Turing asked: can machines think?

This question had perplexed philosophers and thinkers for centuries, most notably René Descartes in his 1637 Discours de la Méthode. They asked how, at some point, it would be possible to differentiate machines from humans.

In 1956, a summer research conference took place at Dartmouth College in New Hampshire. Among the participants were Professor John McCarthy, Claude Shannon, Marvin Minsky, Arthur Samuel, Trenchard More, Ray Solomonoff, Oliver Selfridge, Allen Newell and Herbert Simon.

This conference is thought to be the first time the term “artificial intelligence” was used. The event’s purpose was “to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can, in principle, be so precisely described that a machine can be made to simulate it.”

Chatbots to ChatGPT

AI technology has been through a number of evolutionary steps, made possible largely by the significant increase in computing and storage power over the last 15 years.

The public launch of ChatGPT in November 2022, by U.S.-based company OpenAI, made the world realise that, in some form, AI is capable of sustaining a seemingly reasonable discussion about any topic.

Chatbots have been available in various forms for years (Apple introduced Siri on the iPhone in 2011). But the success of ChatGPT shows that AI has the potential to disrupt many industries and to complement human activities rather than replace them (even though replacement will happen in some cases).

ChatGPT might give the impression of reasoning ability, but it is essentially using statistics to predict one word after another when answering a question. It is massively dependent on the existing body of internet information and the sheer volume of data it was trained on.
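To make that idea concrete, here is a deliberately simplified sketch in Python. It is not OpenAI's actual code, and the word probabilities are invented for illustration: a real model such as ChatGPT learns distributions over tens of thousands of tokens from vast amounts of text. But the basic loop of "pick a statistically likely next word, then repeat" is the same.

```python
import random

# Toy next-word probabilities -- entirely made up for illustration.
NEXT_WORD_PROBS = {
    "the": {"market": 0.5, "economy": 0.3, "fund": 0.2},
    "market": {"is": 0.6, "fell": 0.4},
    "is": {"volatile": 0.7, "rising": 0.3},
}

def generate(prompt_word, steps=3):
    """Build a sentence by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(steps):
        options = NEXT_WORD_PROBS.get(words[-1])
        if not options:
            break  # no statistics for this word, so stop
        next_word = random.choices(
            list(options.keys()), weights=list(options.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g. "the market is volatile"
```

The output can look fluent, yet nothing in the loop "understands" what it is saying; it only follows the statistics it was given.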

Yet, the progress in AI and Machine Learning cannot be overlooked.

The Progress

AI and machine learning are already embedded in many products, like smartphones, bringing new capabilities such as speech and image recognition.

Those technologies help with fraud/spam detection, content moderation, mapping, weather forecasting, supply chain management and many other tasks.

According to a McKinsey report, adoption of AI more than doubled between 2017 and 2022, with a growing number of companies investing in AI to improve their operations and become more competitive.

Over a 10-year horizon, Goldman Sachs estimates that AI could boost global GDP by 7%. In its assessment, the bank states that up to a quarter of U.S. jobs could be replaced by AI and automation, while the vast majority of workers would use AI as a complement to their existing activities.

The Obstacles

The real challenges will come when we try to distinguish content and interactions created by humans from those created by machines.

"By exploiting newer augmented reality and virtual reality technologies, the ability to synthetically create complex environments and allow for human interaction will blur the lines between realities and will increasingly open a huge new set of ethical and legal challenges regarding their proper use', according to Jeffrey L. Turner and Matthew Kirk from law firm Squire Patton Boggs.

The massive use of AI raises questions relating to intellectual property: who owns the content the technology uses to train itself and become more efficient? These issues have so far not been properly addressed. There are also questions of “computer ethics”, which concern how machine-human interactions should be managed.

The idea of endowing machines with a moral code – which code to choose is a separate problem – is one of the questions the field will have to deal with, and it falls within the area of deontic logic.

The Future

"The 21st-century technologies – genetics, nanotechnology, and robotics (GNR) – are so powerful that they can spawn whole new classes of accidents and abuses. Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them". - Bill Joy

He posits that machines (using AI) could become so powerful that we would come to depend on them and have to accept their decisions.

Not everybody agrees with this gloomy view. And despite the huge progress in recent years, the idea of “Strong AI” – the ability to create machines with mental capabilities and consciousness – still seems far away.

AI’s progress also raises wider economic questions, such as the need for a universal basic income (an idea proposed by many thinkers, including AI expert Martin Ford). After all, a number of jobs or activities that are usually done by humans will be done by AI-driven robots. Progress in this field is already impressive (see, for instance, the videos published by Boston Dynamics).

At the World Economic Forum, Microsoft's chief economist Michael Schwarz issued a warning.

He used the example of the car, which he said was a "wonderful invention" even though internal combustion engines still cause thousands of deaths. "I hope AI will never ever become as deadly as internal combustion engines, but I'm quite confident that AI will be used by bad actors and will cause real damage," he told delegates at the conference.

Schwarz welcomed regulation of AI but said that lawmakers should tread carefully. "Once we see real harm, we have to ask ourselves the simple question: 'Can we regulate that in a way where the good things that will be prevented by this regulation are less important?'" Schwarz said.

Google's former "Godfather of AI" Geoffrey Hinton recently warned of the misuse of AI, in an interview with the New York Times. "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton said, explaining the reasons behind his resignation from the tech giant.

All the above information has been sourced from:

Thinking Machines: AI's Promise and Peril by Jocelyn Jovene, senior financial analyst and senior editor, Morningstar France

AI Could 'Cause Real Damage', says Microsoft's Chief Economist by Dow Jones

The state of AI in 2022—and a half decade in review by McKinsey

Generative AI could raise global GDP by 7% by Goldman Sachs

A DARPA Perspective on Artificial Intelligence

Larissa Fernand is an Investment Specialist and Senior Editor at Morningstar India. You can follow her on Twitter.