Artificial Intelligence
Artificial intelligence (AI) is one of the defining technologies of the modern age, with the potential to transform how we live and work. It describes systems that can learn from data, synthesize information, and draw inferences, performing tasks that have traditionally required human intelligence.
What is artificial intelligence?

The term artificial intelligence (AI) has been defined in many different ways over the past few decades. John McCarthy offered the following definition: "It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

However, the conversation predates the term itself: Alan Turing's landmark paper "Computing Machinery and Intelligence," published in 1950, is often taken as the starting point of the artificial intelligence debate. In it, Turing, frequently referred to as the "father of computer science," asks: can machines think? He then proposes what has become known as the "Turing Test," in which a human interrogator tries to distinguish a computer-generated text response from one written by a human. Although the test has drawn intense criticism since its publication, it remains a significant part of AI's history and an ongoing topic of discussion in philosophy, since it trades on ideas about language.

Stuart Russell and Peter Norvig then published Artificial Intelligence: A Modern Approach, which went on to become one of the most influential textbooks on the subject. In it, they explore four potential goals or definitions of AI, differentiating computer systems on the basis of thinking versus acting:


Human perspective:

  • Systems that think like humans
  • Systems that act like humans

Ideal perspective:

  • Systems that think rationally
  • Systems that act rationally

Alan Turing's definition, which focuses on observable behavior, would fall under the category of systems that act like humans.

 

In its simplest form, artificial intelligence is a field that combines computer science with substantial datasets to enable problem-solving. It also encompasses the subfields of machine learning and deep learning, which are frequently mentioned alongside it. These disciplines use AI algorithms to build expert systems that make predictions or classifications based on input data.

​

The development of artificial intelligence is still the subject of much hype, as is the case with many newly introduced technologies. According to Gartner's hype cycle, product innovations such as self-driving cars and personal assistants follow "a typical progression of innovation, from overenthusiasm through a period of disillusionment to an eventual understanding of the innovation's relevance and role in a market or domain." As Lex Fridman noted in his 2019 MIT lecture, we are at the peak of inflated expectations, approaching the trough of disillusionment.

​

As conversations emerge around the ethics of AI, we can begin to see the initial glimpses of the trough of disillusionment.

​

Types of artificial intelligence—weak AI vs. strong AI

​

Weak AI, also known as narrow AI or artificial narrow intelligence (ANI), is AI that has been trained and focused to perform specific tasks. The majority of the AI that exists today is weak AI. This form of AI is anything but weak; it enables some incredibly sophisticated applications, including Apple's Siri, Amazon's Alexa, IBM Watson, and autonomous vehicles. "Narrow" may be the more accurate descriptor.

​

Strong AI is made up of Artificial General Intelligence (AGI) and Artificial Super Intelligence (ASI). Artificial general intelligence, also known as general AI, describes a machine with intelligence comparable to a human's: a self-aware consciousness with the capacity to learn, reason, and plan for the future. Artificial super intelligence, commonly referred to as superintelligence, would surpass the intelligence and capability of the human brain. Even though strong AI is still entirely theoretical, with no real-world applications today, researchers in the field of artificial intelligence continue to study its potential. Until then, science fiction works like 2001: A Space Odyssey, with HAL, the superhuman, rogue computer assistant, may provide the best examples of ASI.

​

Deep learning vs. machine learning

​

Given that deep learning and machine learning are frequently used interchangeably, it is important to understand their differences. Both are subfields of artificial intelligence; deep learning is, in turn, a subfield of machine learning.

​


Deep learning is built on neural networks. The "deep" in deep learning refers to a neural network with more than three layers, inclusive of the input and output layers; such a network can be considered a deep learning algorithm.
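To make the "more than three layers" idea concrete, here is a minimal sketch of a forward pass through a four-layer network in plain NumPy. The layer sizes and the random, untrained weights are arbitrary assumptions for illustration, not a real trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Standard rectified-linear activation
    return np.maximum(0.0, x)

# Four layers counting input and output: 4 inputs -> 8 -> 8 -> 2 outputs
sizes = [4, 8, 8, 2]
weights = [rng.normal(size=(m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(n) for n in sizes[1:]]

def forward(x):
    # Pass the input through each hidden layer with a nonlinearity...
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    # ...and finish with a linear output layer
    return x @ weights[-1] + biases[-1]

out = forward(np.ones(4))
print(out.shape)  # (2,)
```

Training such a network would adjust the weights via backpropagation; here only the layered structure is shown.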

The way each algorithm learns is where deep learning and machine learning diverge. Deep learning significantly reduces the amount of manual human intervention necessary during the feature extraction phase of the process, allowing for the use of bigger data sets. As Lex Fridman pointed out in the same MIT lecture mentioned above, deep learning is "scalable machine learning." Traditional, or "non-deep," machine learning is more reliant on human input: human specialists create a hierarchy of features used to distinguish between different data inputs, typically learning from more structured data.

​

Although "deep" machine learning can use labeled datasets, commonly referred to as supervised learning, to guide its algorithm, it is not necessary. It can automatically discover the hierarchy of features that separate distinct types of data from one another and ingest unstructured material in its raw form, such as text and photos. We can scale machine learning in more exciting ways since it doesn't require human intervention to handle data, unlike machine learning.

​

Artificial intelligence applications

​

AI systems have a wide range of practical applications nowadays. Some of the most typical examples are provided below:

​

  • Speech recognition: It is a capability that employs natural language processing (NLP) to convert spoken words into written ones. It is also known as automatic speech recognition (ASR), computer speech recognition, or speech-to-text. Many mobile devices have speech recognition built into their operating systems to enable voice search (like Siri) and to increase messaging accessibility

​

  • Customer service: Online virtual agents are replacing human agents along the customer journey. They answer frequently asked questions (FAQs) about topics such as shipping, provide personalized advice, cross-sell products, and suggest sizes for users, changing the way we think about user engagement on websites and social media. Examples include virtual agent-equipped messaging bots on e-commerce websites, chat programs like Slack and Facebook Messenger, and tasks often carried out by virtual assistants and voice assistants

​

  • Computer vision: With the aid of artificial intelligence, computers and other systems are now capable of extracting useful information from digital images, videos, and other visual inputs and acting accordingly. It differs from image recognition tasks in that it can make recommendations. Computer vision, which relies on convolutional neural networks, is used for self-driving cars in the automotive sector, radiological imaging in healthcare, and photo tagging in social media

​

  • Recommendation engines: AI algorithms can assist in finding data trends that can be leveraged to create more effective cross-selling strategies by using historical consumption behavior data. Online shops utilize this to suggest pertinent add-ons to customers during the checkout process

​

  • Automated stock trading: Designed to optimize stock portfolios, AI-driven high-frequency trading platforms execute thousands or even millions of trades per day without human intervention
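As a rough sketch of the recommendation-engine idea above, an item-to-item recommender can score products by the cosine similarity of their purchase patterns across users. The user-item matrix and item names below are invented for illustration:

```python
import numpy as np

# Rows are users, columns are items; 1 means the user bought the item.
ratings = np.array([
    # laptop, mouse, keyboard, monitor
    [1, 1, 1, 0],   # user A
    [1, 1, 0, 1],   # user B
    [0, 1, 1, 1],   # user C
], dtype=float)
items = ["laptop", "mouse", "keyboard", "monitor"]

def cosine_sim(a, b):
    # Cosine of the angle between two purchase-pattern vectors
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def recommend_addon(item_name):
    i = items.index(item_name)
    col = ratings[:, i]
    # Score every other item by how similar its buyers are
    scores = [(cosine_sim(col, ratings[:, j]), items[j])
              for j in range(len(items)) if j != i]
    return max(scores)[1]

print(recommend_addon("laptop"))  # mouse
```

Production systems add weighting, implicit feedback, and matrix factorization on top, but the historical-behavior intuition is the same.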

​
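The convolutional neural networks mentioned in the computer vision bullet are built around the convolution operation, which a toy NumPy version can illustrate. The tiny image and the vertical edge-detection kernel below are made-up examples, not part of any real vision system:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid (no padding) 2D convolution, as used in CNN layers
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Each output pixel is a weighted sum of a local patch
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A tiny image with a vertical edge down the middle
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# Sobel-style vertical edge detector
kernel = np.array([
    [-1, 0, 1],
    [-2, 0, 2],
    [-1, 0, 1],
], dtype=float)

edges = conv2d(image, kernel)
print(edges)  # every position responds to the edge
```

In a real CNN, the kernel values are not hand-picked like this; they are learned from data, and many kernels are stacked into layers.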

Key dates and figures in artificial intelligence history

​

The idea of "a machine that thinks" dates back to ancient Greece. But significant events and milestones in the evolution of artificial intelligence since the advent of electronic computing (and in relation to some of the topics covered in this article) include the following:

​

  • 1950: Alan Turing publishes Computing Machinery and Intelligence. In the paper, Turing, who gained notoriety during World War II for cracking the Nazi ENIGMA code, proposes to answer the question "Can machines think?" and introduces the Turing Test to determine whether a computer can exhibit the same intelligence (or the results of the same intelligence) as a human. The value of the Turing test has been debated ever since

​

  • 1956: John McCarthy coins the phrase "artificial intelligence" at the inaugural AI conference at Dartmouth College. (McCarthy later created the Lisp language.) Later that year, Allen Newell, J.C. Shaw, and Herbert Simon develop the Logic Theorist, the first-ever running AI software program

​

  • 1967: The Mark 1 Perceptron, created by Frank Rosenblatt, is the first machine built on a neural network that "learned" by making mistakes. Perceptrons, written by Marvin Minsky and Seymour Papert, is published just a year later. It quickly establishes itself as a classic work on neural networks while also serving, at least temporarily, as a counterargument to further neural network research

​

  • 1980s: In AI applications, neural networks that train themselves via a backpropagation technique are frequently employed

​

  • 1997: In a chess match (and rematch), IBM's Deep Blue defeats then-reigning world champion Garry Kasparov

​

  • 2011: IBM Watson defeats champions Ken Jennings and Brad Rutter at Jeopardy!

​

  • 2015: Convolutional neural networks, a specific type of deep neural network, are used by Baidu's Minwa supercomputer to detect and classify images more accurately than the average person

​

  • 2016: Lee Sedol, the reigning world champion Go player, is defeated by DeepMind's AlphaGo software in a five-game match. The victory is noteworthy given the enormous number of possible moves as the game develops (more than 14.5 trillion after just four moves!). Later, Google reportedly paid $400 million to buy DeepMind
