New Age of AI – AI Fundamentals, Impact and Outlook


Product Development


Ever since OpenAI released ChatGPT – initially based on its GPT-3.5 model – to the public, everyone has been talking about AI. This technology raises immense hopes, both for the economy and for society as a whole, as well as deep fears. It promises to further unite the world and allow every single person to share in its benefits, but it also harbours the risk of creating new divides and deepening existing ones – for example, between technologically advanced countries, social classes and companies and those that are less tech-savvy.

Today, we can only be sure of one thing: We are at the beginning of a development that will expand into every area of life and business and that is progressing at a speed that we have never experienced in the entire history of mankind. It is, and will be, a major challenge to steer this development in such a way that we get the best out of it in the long term – and avert problems and dangers that are difficult to get under control again.

This first article – and you can look forward to more – is intended to help strengthen your understanding of this development and help prepare you and your companies for this wild ride.

What Is Artificial Intelligence?

AI is nothing really new in itself. The term Artificial Intelligence was introduced as early as 1956 by John McCarthy during the Dartmouth Conference, which is seen as the starting point of AI as a field of research. It describes algorithms and models that allow computers to perform tasks that typically require human intelligence.

Alan Turing and Isaac Asimov in particular were already thinking about this topic years before the Dartmouth Conference. After all, it is an old human fantasy, and just as great a desire, to create entities that do our work for us – and confirm us as great creators.

Evolution of AI 

1) The Past

The development of AI has been rocky – repeatedly driven by high hopes and repeatedly marked by disappointments. The first theoretical considerations came from Alan Turing, who devised the Turing Test – intended to decide whether a machine can think for itself, or rather whether a machine's thinking can be distinguished from that of a human being – and from Isaac Asimov, who considered how simple rules could ensure that an automaton or robot would not harm humans. Building on this groundwork, many scientists began working on AI in 1956.

[Image: AI fundamentals – past, present and future of AI]

This so-called Golden Age of AI was driven by great expectations and had great goals. The pinnacle was to be a computer that possessed all of humanity’s knowledge and, based on this, could solve all of humanity’s questions and problems.

However, it turned out that the scientists and developers were unable to find solutions even for seemingly simple problems. They could not generate usable automatic translations or recognise spoken language. In 1973, the Lighthill Report predicted that machines would always remain at the level of an experienced amateur. This dashed the great hopes, funding for AI research dried up, and the so-called AI winter prevailed until around 1980.

2) The Present

Developments in areas such as computer technology and the ever-increasing amount of easily accessible data, not least due to the success of the internet, breathed new life into AI research from around 1980. By 1997 at the latest – when IBM's chess computer Deep Blue defeated the then world chess champion Garry Kasparov in a highly publicised man-versus-machine tournament – AI had once again arrived on the world stage. Large companies such as IBM, Google and Apple soon began to invest substantially in AI research. This resulted in applications such as Siri, IBM Watson, auto-correct features, automatic song recommendations from Spotify and the Roomba robot vacuum from iRobot.

This development has accelerated with the advent of ever-faster computers and ever-larger amounts of available data, but it has not been without its disappointments. IBM's Watson, for example, mastered natural-language question answering and impressively demonstrated this ability by winning the American quiz show Jeopardy! in 2011, but the system fell short of expectations in other areas such as medicine.

[Image: AI fundamentals – past, present and future of AI]

The next breakthrough came with a completely new learning approach for AI algorithms: instead of basing decisions on the broadest possible set of data, the system only receives the basic rules of a problem and then finds good solutions itself through iterative trials, coming a little closer to the best solution with each iteration.

A major advantage of this approach – known as reinforcement learning, or deep reinforcement learning when combined with deep neural networks – is that it can be applied to almost any problem. Google's DeepMind proved just how powerful it is with its AlphaZero system in 2017. It gave the system only the rules of chess, let it train against itself, and then had it play 100 games against Stockfish 8, the strongest chess engine of the time. Of these 100 games, AlphaZero won 28, drew 72 and didn't lose a single one – and it only had to train for around 4 hours to achieve this feat!
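The trial-and-error idea behind this approach can be sketched with a classic toy problem, the multi-armed bandit. The snippet below is a minimal illustration of reinforcement-style learning, not AlphaZero's actual algorithm (which combines deep neural networks with tree search); all numbers and names are illustrative:

```python
import random

def epsilon_greedy_bandit(true_means, steps=10000, epsilon=0.1, seed=0):
    """Learn the best of several actions purely by iterative trial,
    nudging the value estimates a little closer with every attempt."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_means)   # learned value of each action
    counts = [0] * len(true_means)
    for _ in range(steps):
        if rng.random() < epsilon:                 # occasionally explore
            a = rng.randrange(len(true_means))
        else:                                      # otherwise exploit the best guess
            a = max(range(len(true_means)), key=lambda i: estimates[i])
        reward = rng.gauss(true_means[a], 1.0)     # noisy feedback from the world
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running mean
    return estimates

est = epsilon_greedy_bandit([0.2, 0.5, 0.9])
print(max(range(3), key=lambda i: est[i]))  # the learner settles on action 2
```

No one tells the learner which action is best; it discovers this from repeated trials and feedback alone – the same principle, scaled up enormously, behind systems like AlphaZero.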

Since then, deep learning systems have learned to recognise objects and faces in images, to create their own images – and more recently videos – from a simple description, to predict the folding of proteins, to detect diseases such as cancer in X-ray images, to write advertising copy and even entire articles and books, to write computer code, and to compose music.

And since OpenAI, drawing on vast amounts of training data, released ChatGPT to the public in November 2022, this technology has finally entered every classroom, every office and probably even your home.

3) The Future

With a technology that is developing so quickly, it is of course difficult to predict what the future will bring. In any case, we will see rapid improvements in the areas where it is already in use today. The automatically generated images will soon be almost indistinguishable from photographs, and the AIs will create ever better and longer videos – soon probably complete movies, entirely according to the specifications of their viewers. They will take on more and more tasks in research, in companies, and we will find ourselves facing them more and more often when we communicate online. Language barriers will fall, everyone will have access to the best teachers in every field, and more and more of the decisions we make will be supported by AI.

We will have to learn to live closely with AI, whether privately or professionally. And we already have to prepare ourselves in both areas to adapt to an ever faster-changing environment.

[Image: AI fundamentals – the future impact of AI]

We are already feeling many of the consequences of this development today. AIs influence our behaviour on social media, and we struggle to distinguish real news from artificially created deepfakes in images and videos. During a phone or video call, we can no longer be sure that we are dealing with a real person. Texts are increasingly being generated automatically, which requires new approaches from copywriters, teachers and lawyers alike.

With ever-improving systems, we can expect more and more professions to change fundamentally because of AI. And this will increasingly involve complex and creative tasks. A recent study by the consulting firm Cognizant together with Oxford Economics estimates that 90% of jobs in the US will be disrupted by AI (https://www.cognizant.com/us/en/aem-i/generative-ai-economic-model-oxford-economics).

Looking further into the future, however, AI could have an even greater impact. There is currently a debate in scientific circles about whether and when AI could reach the full breadth of human intelligence – so-called Artificial General Intelligence (AGI). The technology could then be used for any intellectual human task. Further development could lead to Artificial Superintelligence (ASI), i.e. AI systems that surpass us in intelligence in all fields. Some experts believe that this could happen once AIs can create even better AIs themselves, and that this process would then accelerate exponentially. According to these experts, we would at that point have completely lost control of the technology and would be at the mercy of superior AI "beings".

Artificial General Intelligence

Artificial General Intelligence (AGI) refers to a type of artificial intelligence that has the ability to understand, learn, and apply knowledge across a wide range of tasks at a level of competence comparable to that of a human. Unlike narrow AI, which is designed to perform specific tasks with expertise, AGI can generalize its learning and reasoning capabilities to solve any problem, including those it has not been specifically programmed for. AGI encompasses a broad and flexible range of cognitive abilities, enabling it to perform any intellectual task that a human being can.

Artificial Superintelligence

Artificial Superintelligence (ASI) refers to a hypothetical AI that surpasses human intelligence across all fields, including creativity, general wisdom, and problem-solving. Unlike Artificial General Intelligence (AGI), which aims to match human cognitive abilities, ASI would be capable of exceeding the intellectual performance of the best human minds in virtually every discipline, from scientific research and invention to social interactions and emotional understanding. The concept of ASI raises both opportunities and significant ethical and safety concerns, as it could lead to unprecedented advances in technology, medicine, and science, but also poses existential risks if not properly controlled or aligned with human values and interests.

The Singularity

The Singularity, in the context of AI development, refers to a hypothetical future point at which artificial intelligence (AI) will have advanced to the point of creating machines that are smarter than human beings. This moment is expected to lead to exponential technological growth, resulting in unfathomable changes to human civilization. The concept suggests that post-Singularity, AI could improve itself autonomously at an ever-increasing rate, leading to the creation of machines with superhuman intelligence and abilities. The idea of the Singularity raises both excitement and concern, as it presents opportunities for solving humanity’s most pressing problems but also poses significant ethical, safety, and existential risks. The term is widely associated with futurists like Ray Kurzweil, who predict that this event could occur within the 21st century.

AI Hype – Why Now?

As mentioned above, the latest wave of AI development has mainly been driven by increasing computing power, the availability of large amounts of data and the development of new AI concepts.

[Image: AI fundamentals – the AI revolution – why now?]

However, acceptance among the general public and within companies should not be underestimated. We are increasingly living in a digital world, communicating, working and buying online. We are also used to accessing new, even more convenient online services ever more quickly and expect individual and personalised services in every area. These are natural fields of application for AI technology, which is why we are embracing it with open arms.

Main Types of Current AI Systems

AI is an umbrella term covering many technologies. These include, for example, machine learning, where decisions are made automatically but under human guidance, and – as a subtype of it – deep learning, where the algorithm itself learns which features of the data matter. Here is a list of the most important types of AI:

1) Machine Learning Systems

These AI systems learn from data, identifying patterns and making decisions with minimal human intervention. Machine Learning (ML) is the foundation of many modern AI applications, including image and speech recognition, medical diagnosis, and stock market trading. It includes subcategories such as Supervised Learning, Unsupervised Learning, and Reinforcement Learning.
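The core idea of supervised learning – generalising from labelled examples rather than hand-written rules – can be sketched with one of the simplest possible learners, a 1-nearest-neighbour classifier. The data and labels below are invented purely for illustration:

```python
import math

def nearest_neighbour(train, query):
    """1-nearest-neighbour: predict the label of the closest training point.
    A minimal example of supervised learning - the system generalises from
    labelled examples with no hand-written decision rules."""
    point, label = min(train, key=lambda ex: math.dist(ex[0], query))
    return label

# Labelled examples: (features, label) - here, (height_cm, weight_kg) -> species
train = [((20, 4), "cat"), ((25, 6), "cat"), ((60, 25), "dog"), ((55, 22), "dog")]
print(nearest_neighbour(train, (23, 5)))   # -> cat
print(nearest_neighbour(train, (58, 24)))  # -> dog
```

Real systems use far more sophisticated models and vastly more data, but the principle is the same: the behaviour comes from the examples, not from explicitly programmed rules.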

2) Expert Systems

Designed to mimic the decision-making ability of a human expert, expert systems use predefined rules and knowledge to make inferences. They are used in specialized fields like medical diagnosis, engineering, finance, and more to provide advice, interpret data, or diagnose issues based on their vast database of knowledge.
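The rule-based inference described above can be sketched as a tiny forward-chaining engine. The rules and facts are made up for illustration and do not come from any real diagnostic system:

```python
# If-then rules: (set of required facts, conclusion to add). Illustrative only.
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "see_doctor"),
    ({"rash"}, "allergy_suspected"),
]

def infer(facts):
    """Forward chaining: repeatedly fire any rule whose conditions are all
    known facts, until no rule adds anything new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "short_of_breath"}))
```

Note how one conclusion ("flu_suspected") can in turn trigger further rules ("see_doctor") – chains of inference like this are what gives expert systems their power, and the fixed rule base is also their limitation.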

3) Deep Learning

Deep Learning is a subset of machine learning that utilizes deep neural networks to model and understand complex patterns in large volumes of data. By mimicking the structure and function of the human brain through layers of artificial neurons, deep learning algorithms can automatically extract and learn features relevant to their tasks, such as image recognition or natural language processing. This technology has led to significant advancements in AI, enabling machines to perform a wide range of tasks with increasing accuracy and autonomy, without needing explicit instructions for feature extraction and interpretation.
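The layered structure described above can be sketched as a minimal forward pass through two stacked layers. Real deep networks have many more layers and *learn* their weights from data during training; the random weights here are placeholders just to show the structure:

```python
import math
import random

def layer(inputs, weights, biases):
    """One dense layer: weighted sums of the inputs, then a non-linearity."""
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def tiny_network(x):
    """Forward pass through two stacked layers (2 inputs -> 3 hidden -> 1 out).
    The weights are random placeholders, not trained values."""
    rng = random.Random(42)
    w1 = [[rng.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
    b1 = [0.0] * 3
    w2 = [[rng.uniform(-1, 1) for _ in range(3)]]
    b2 = [0.0]
    hidden = layer(x, w1, b1)      # layer 1 extracts intermediate features
    return layer(hidden, w2, b2)   # layer 2 combines them into an output

out = tiny_network([0.5, -0.2])
print(out)  # a single number between -1 and 1
```

Each layer transforms the output of the previous one, which is what lets deep networks build up increasingly abstract features – edges, then shapes, then faces, for example – without anyone specifying those features by hand.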

4) Generative AI

This type of AI focuses on creating new content, such as images, text, music, and even video, that is similar to human-created content. Generative Adversarial Networks (GANs) are a popular approach, where two neural networks compete with each other to improve the quality and realism of the generated output. Applications include art creation, video game content generation, and more.

5) Natural Language Processing

NLP systems are designed to understand, interpret, and generate human language in a way that is valuable. They enable computers to perform tasks such as translation, sentiment analysis, and speech recognition. Chatbots and virtual assistants like Siri and Alexa are common applications of NLP.
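As a deliberately simple stand-in for the statistical models real NLP systems use, sentiment analysis can be sketched as counting words against a small lexicon; the word lists here are illustrative:

```python
# Tiny illustrative sentiment lexicon - real systems use learned models
# rather than fixed word lists.
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "terrible", "hate", "awful", "sad"}

def sentiment(text):
    """Score a text by counting positive and negative words."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product, it is great."))  # -> positive
print(sentiment("Terrible service, I hate it."))       # -> negative
```

A lexicon approach like this fails on negation and sarcasm ("not bad at all"), which is precisely why modern NLP relies on models that learn context rather than counting words.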

6) Computer Vision

This AI technology enables machines to interpret and make decisions based on visual data. From recognizing faces in social media photos to autonomous vehicles interpreting the driving environment, computer vision systems are used in security, retail, healthcare, and many other industries.

7) Robotic Process Automation

RPA technologies allow for the automation of repetitive tasks usually performed by humans. By mimicking human interactions with software and applications, RPA can automate processes in various domains such as customer service, data entry, and more, enhancing efficiency and reducing errors.

8) Cognitive Computing

Aimed at simulating human thought processes in a computerized model, cognitive computing systems combine self-learning algorithms with data mining, pattern recognition, and natural language processing to mimic the way the human brain works. The goal is to create automated IT systems capable of solving problems without human assistance.

Current limitations of AI

Despite the rapid advancements in AI, current systems still face several significant limitations that researchers and developers are actively working to overcome. These limitations include:

  • Generalization: AI systems, especially those based on narrow AI, struggle to generalize knowledge from one domain to another. They excel in the specific tasks they’re trained for but falter outside those parameters.
  • Data Dependency: AI models, particularly in machine learning and deep learning, require vast amounts of data to learn and make accurate predictions. The quality and diversity of this data are crucial; poor data quality can lead to biased or inaccurate outcomes.
  • Understanding Context: AI often lacks the ability to fully understand context or nuances in human language and interactions. This limitation is particularly evident in natural language processing applications, where the subtleties of human communication can be lost or misunderstood.
  • Creativity and Innovation: While AI can generate new content or solutions based on existing patterns, its ability to be truly creative or to innovate in the way humans do is limited. AI’s “creativity” is bounded by the data and algorithms it has been exposed to.
  • Ethical and Social Concerns: AI systems can inadvertently perpetuate biases present in their training data, leading to unfair or unethical outcomes. Additionally, the increasing automation facilitated by AI raises concerns about job displacement and the future of work.
  • Energy Consumption: Advanced AI models, especially those used in deep learning, require significant computational power and energy, raising concerns about their environmental impact.
  • Interpretability and Transparency: Many AI models, particularly deep learning models, are often described as “black boxes” because their decision-making processes are not easily interpretable by humans. This lack of transparency can be a barrier in critical applications where understanding AI’s reasoning is essential.

Addressing these limitations is a focus of ongoing research in AI, aiming to create more adaptable, reliable, and transparent AI systems that can work more harmoniously within human contexts and societies.

Applications of AI in Business

[Image: AI fundamentals – initial assessment of the suitability of AI applications]

Sooner or later, AI will influence most areas of your company. The earlier a company engages with it, the better prepared it will be for the change and the greater its chances of successfully utilising AI in a competitive environment. Most importantly, early engagement lays the foundations for the successful utilisation of this technology. Even if their relative importance varies depending on the industry and the area of application, the following components should be taken into account:

  • Clear Strategy and Objectives: Before implementing AI, a company should have a clear understanding of its strategic goals and how AI can help achieve them. This involves identifying specific problems AI can solve or areas where it can add value.
  • Data Infrastructure: AI models require large amounts of data to learn and make accurate predictions. A robust data infrastructure that ensures the quality, accessibility, and security of this data is crucial. Companies need to invest in data collection, storage, and management systems.
  • Talent and Expertise: Having the right talent is essential for developing, deploying, and managing AI solutions. This includes not only AI researchers and data scientists but also domain experts who understand the company’s business context and can work alongside technical staff.
  • Technology Infrastructure: Effective AI deployment requires a suitable technology infrastructure, including hardware and software that can support AI development and integration into existing systems. Cloud computing resources and specialized hardware for AI processing may be necessary.
  • Ethical and Legal Considerations: Companies must consider the ethical implications of AI, including privacy, transparency, and fairness. Compliance with relevant laws and regulations governing data protection and AI use is also essential.
  • Change Management and Training: Successfully integrating AI into business processes often requires changes to workflows and job roles. Companies need to invest in training and change management to ensure employees can work effectively with AI systems.
  • Scalability and Maintenance: AI systems need to be scalable to handle increasing amounts of data and use cases. Regular maintenance and updates are also necessary to ensure they continue to perform well over time.

[Image: AI fundamentals – what does AI mean for your company?]

Some well-known strategy consultancies are of the opinion that the learning effects and the development of the necessary infrastructure are reason enough for the introduction of AI, even if it is ultimately not economically viable. I do not agree with this view. AI often triggers great fears among employees and uncertainty among management. If a first AI project fails because its goals are set too high – or too naively – this can strengthen internal resistance for a long time and put the brakes on similar initiatives. If customers are also disappointed, the damage can be even greater. However, if a project is well prepared and successful, nothing stands in the way of AI-friendly further development.

Nevertheless, there are ways to gain initial experience and test success with relatively little effort and risk, for example by working with external companies that offer easy-to-configure AI platforms. In particular, this approach makes it possible to take the first steps without immediately building up a lot of AI expertise in the organisation.

[Image: AI fundamentals – deriving value from AI]

When analysing the potential benefits of AI for a company – this will be the focus of a subsequent article – the possible areas of application should be compared with the characteristics of such systems: Do better predictions add substantial value? Can the technology be integrated well into the existing infrastructure and organisation? Are there processes that can be easily automated? Does the technology fit the company's innovation strategy?

Risks of AI and Ethical Considerations in Business

Your use of AI in the company, whether for internal processes or towards customers, is of course not without risks. These depend, among other things, on the type of AI technology that is used. As AI systems are usually dynamic and change over time – for example through repeated training with new data – appropriate risk management structures should be in place.

The most often cited risks of AI systems in companies are privacy (the handling of confidential data), bias (distorted results, usually due to incomplete or unbalanced training data) and the lack of transparency in AI-generated decisions, especially with self-learning algorithms such as deep learning. However, there are many other risks to consider, and as these depend on the company, the area of application and the technology, the following list is not exhaustive:

  • Data Privacy and Security: AI systems rely heavily on data, raising concerns about data privacy breaches and unauthorized access. Sensitive information can be exposed if not properly protected.
  • Bias and Fairness: If the data used to train AI models is biased, the AI system’s decisions can also be biased, leading to unfair outcomes or discrimination. This can harm a company’s reputation and lead to legal issues. It is imperative to review the training data on a regular basis.
  • Lack of Transparency and Explainability: Many AI models, especially deep learning algorithms, are often seen as “black boxes” with decision-making processes that are not easily understood. This lack of transparency can lead to trust issues among users and stakeholders.
  • Dependency and Overreliance: Overreliance on AI systems can make a company vulnerable if those systems fail or if the data feeding them becomes corrupted. Critical decision-making processes might become too dependent on AI, potentially leading to oversight failures.
  • Regulatory and Compliance Risks: As governments and regulatory bodies introduce new regulations governing AI and data use, companies must ensure their AI systems comply. Non-compliance can result in legal penalties and damage to reputation.
  • Integration Challenges: Integrating AI systems with existing IT infrastructure can be complex and costly, potentially leading to operational disruptions and compatibility issues.
  • Operational Risks: AI models can perform poorly if they encounter data or situations that significantly differ from their training data, leading to incorrect predictions or decisions that can affect business operations.
  • Intellectual Property Risks: When using third-party AI models or algorithms, companies must navigate intellectual property rights, licensing, and potential restrictions on use, which can impact innovation and competitive advantage.
  • Economic and Financial Risks: Investments in AI systems can be significant, with no guaranteed return. Misjudgments in the application of AI can lead to financial losses and wasted resources.

In addition, the use of AI requires an awareness of ethical aspects towards employees, customers and the environment. Here are the most important ones; the first three overlap with the risks above and are repeated for the sake of completeness:

  • Privacy: AI systems often process vast amounts of personal data, raising concerns about user privacy. Businesses must ensure that data is collected, stored, and used in a manner that respects privacy norms and complies with data protection regulations.
  • Bias and Fairness: AI algorithms can perpetuate or even exacerbate biases present in their training data, leading to unfair treatment of individuals or groups. Ensuring AI systems are developed and deployed in a way that minimizes bias and promotes fairness is a critical ethical concern.
  • Transparency and Explainability: There’s a growing demand for AI systems to be transparent and their decisions explainable, especially in critical applications affecting people’s lives. Businesses need to balance the complexity of AI models with the need for them to be understandable to users and stakeholders.
  • Accountability and Responsibility: Determining accountability for the decisions made by AI systems can be challenging. It’s important for businesses to establish clear guidelines on responsibility, especially in cases where AI-driven decisions may have significant consequences.
  • Security: AI systems are vulnerable to various security threats, including data breaches and adversarial attacks. Ensuring the security of AI systems to protect sensitive data and infrastructure is an essential ethical and operational consideration.
  • Job Displacement: The automation of tasks previously performed by humans raises concerns about job displacement and the future of work. Businesses should consider the social implications of deploying AI and explore ways to mitigate negative impacts on employment.
  • Informed Consent: When using AI in contexts that involve human subjects, such as customers or employees, obtaining informed consent is crucial. Individuals should be aware of how AI is being used and the implications for their data and privacy.
  • Societal Impact: Beyond immediate business concerns, companies should consider the broader impact of their AI applications on society. This includes the potential to reinforce societal inequalities or impact democratic processes.
  • Environmental Impact: The energy consumption required for training large AI models has a significant environmental footprint. Ethical AI use involves considering and mitigating these environmental impacts.

Addressing these ethical considerations requires a multifaceted approach, including ethical guidelines, stakeholder engagement, transparency in AI development and deployment processes, and adherence to relevant laws and regulations. It also involves a commitment to continuous learning and adaptation as AI technologies and their societal implications evolve.

Where to Start?

As urgent as it is for companies to address this topic, the task can be overwhelming. Getting insights from articles like this one – and those that follow – is certainly a good start. Earning an AI certificate from one of the many providers is also time well spent, if you can spare it.

From a business development perspective, however, it is better to work with experts – like us. We have designed a special offer for this, which provides a lot of hands-on knowledge within 7 days and sets the most important foundations for a robust AI strategy, based on real-world experience.

It should also be mentioned that any use of AI stands or falls with the quality of the available data. The development of a robust data strategy is therefore usually the first step and should be tackled independently of the use of AI. Of course, we are happy to help with this as well.

Conclusion

This article has looked at the beginnings and development of AI, given an overview of the current state of the technology and ventured a look into the future. Even though the impact on society as a whole is considerable, we have focussed here on the impact for companies, on the opportunities, but also on the risks that need to be considered. And finally, we have emphasised the importance of dealing with this topic at an early stage.

AI is part of our society and economy, and will remain so forever. Only companies that embrace this will be able to survive in the long term.

To quote an unknown source: the best time to start was yesterday; the second-best time is today!

