

Below we explore the issues surrounding artificial intelligence (AI) today.

AI for What Purpose?


While current artificial intelligence algorithms may be limited to learning a single task, the technology’s underlying principles and techniques are applicable to a surprisingly wide range of uses.

 

Indeed, almost every sector of the economy and society has been affected by AI - or will be soon. Given this broad applicability, and the current shortage of AI-related talent, it is necessary to consider how we should develop and use this new tool to its maximum positive benefit.

 

We should also consider whether some AI systems create such a high risk of potential misuse that they should not be allowed at all. Facial recognition, for example, is one area of AI that has come under particularly intense public scrutiny, both because of related privacy concerns and due to the technology’s potential use as a tool of oppression; it therefore serves as a particularly thorny test case for when and how a particular area of AI both can and should be shut down entirely, and whether it is possible to use such technology responsibly and benevolently.

 

In other cases, challenges related to AI lie not with the broad technology itself but with its specific use. Algorithms applied within the criminal justice system, for example, have come under strong criticism - as they not only have potentially huge impacts on individuals’ lives, but are also subject to the deeply-embedded biases and historical inequities reflected in the training data and human developers that inform them.

 

Even among less controversial uses of AI there remains the question of how to best leverage scarce resources. A huge portion of AI-related talent, for example, has been directed at the development of autonomous vehicles and other private, for-profit company endeavors, and military applications - leaving fewer capable people dedicated to deploying AI for the common good.

 

As we foster a technology that many believe has the potential to reshape society, we need to find new ways for it to represent the interests of many different stakeholders, and to play a positive role in our future. We should also consider whether some applications of the technology should be banned entirely.

Bias and Fairness in AI Algorithms

AI systems risk exacerbating existing inequities in consequential and damaging ways. The real-world data informing these systems reflect the inequalities and biases of the real world, and artificial intelligence has the potential to encode and exacerbate those biases by reflecting the assumptions, interests, and world views of its developers and users.

Countless examples exist of AI systems amplifying the biases of their training data, resulting in the use of racist language or discriminatory recommendations. To address this, researchers have sought to mathematically define and measure fairness - only to realize that there are many ways to define what is fair, and that it is often impossible to satisfy all fairness measures at once.
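To make that tension concrete, consider two widely used fairness measures: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates across groups). The sketch below - with purely hypothetical numbers - shows a perfectly accurate classifier that satisfies the second measure while violating the first, simply because the two groups have different underlying base rates.

```python
# Toy illustration (hypothetical data) of conflicting fairness measures.

def selection_rate(preds):
    # Share of people the classifier selects (predicts 1 for).
    return sum(preds) / len(preds)

def true_positive_rate(labels, preds):
    # Share of truly positive people the classifier correctly selects.
    hits = [p for y, p in zip(labels, preds) if y == 1]
    return sum(hits) / len(hits)

# Two groups with different base rates; the classifier is perfectly accurate.
y_a, pred_a = [1, 1, 0, 0], [1, 1, 0, 0]  # group A: 50% positive
y_b, pred_b = [1, 0, 0, 0], [1, 0, 0, 0]  # group B: 25% positive

# Equal opportunity holds: true-positive rates are 1.0 for both groups.
print(true_positive_rate(y_a, pred_a), true_positive_rate(y_b, pred_b))

# Demographic parity fails: selection rates are 0.5 vs 0.25.
print(selection_rate(pred_a), selection_rate(pred_b))
```

Here no tweak to the classifier can equalize both measures while staying accurate - the conflict comes from the data, not the code.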

 

In addition, machine learning - currently the most common form of AI - works by looking for patterns in real-world examples (“training data”), which can lead to problems in multiple ways. For example, the training data may omit certain types of people or be gathered within a narrow cultural context. In one instance, a tool designed to sharpen blurry images of faces was found to consistently render people with darker skin tones as white, most infamously in a photo of former US President Barack Obama. The tool was criticized both for its reliance on training data predominantly made up of white faces and for its developers’ failure to anticipate or test for such issues.

 

Real-world training data includes all of the inequalities, biases, and unjust realities of the real world - and machine learning systems are not capable of identifying or fixing unjust processes. Unfortunately, simply removing information on race or gender from the data does not solve this problem. That is because AI systems can use “proxy variables” - information in the data that correlates with omitted social groups - to nonetheless treat different groups differently.
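A minimal sketch of the proxy-variable effect, using deliberately tiny, invented data and an assumed scikit-learn model: the protected attribute is withheld from training, yet a correlated neighbourhood code lets the model reproduce the historical disparity anyway.

```python
# Hypothetical example: race/gender is omitted from the features, but a
# correlated "neighbourhood" field still lets the model reproduce the bias.
from sklearn.tree import DecisionTreeClassifier

# Each row: [income, neighbourhood]. In this toy data the neighbourhood
# correlates perfectly with the omitted protected group.
X = [[30, 0], [35, 0], [40, 0], [32, 1], [37, 1], [42, 1]]
group = [0, 0, 0, 1, 1, 1]  # protected attribute - never shown to the model
y = [1, 1, 1, 0, 0, 1]      # historically biased approval decisions

model = DecisionTreeClassifier().fit(X, y)  # fits the biased pattern exactly
preds = model.predict(X)

for g in (0, 1):
    rate = sum(int(p) for p, gg in zip(preds, group) if gg == g) / 3
    print(f"approval rate for group {g}: {rate:.2f}")
# Prints 1.00 for group 0 and 0.33 for group 1: the remaining, correlated
# features carry enough information to treat the groups differently.
```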

 

Attempts to encode fairness have therefore led to the identification of related tradeoffs that are present in any society but often not fully acknowledged. Efforts to develop truly “fair” AI do not resolve these tradeoffs, and tend to satisfy one fairness measure at the expense of another. Simply automating these processes misses opportunities for gaining a broader understanding and instigating change. Before automating fairness, developers should instead assess AI systems for multiple types of fairness, identify the key factors leading to unfair outcomes, and consider alternative approaches. This in turn could push us towards not only fairer, but also more innovative systems that challenge the status quo.

AI and COVID-19


COVID-19 has had contradictory effects on artificial intelligence. The pandemic has highlighted the unusual nature of AI as a technology that is at once cutting-edge and reliant on the status quo - and it may ultimately lead to a greater appreciation of the value of human interaction.

 

Innovative AI systems have played a role in addressing the health crisis by tracking its spread, identifying potential drug therapies, and sifting through thousands of published papers on the topic for insights. At the same time, the pandemic poses fundamental challenges to AI systems. The version of AI now in common use, machine learning, relies on historical training data and assumes that the patterns identified in that data are still relevant. However, during unprecedented situations, this type of assumption can be problematic.

Approaches to addressing this problem include using human expertise to recognize the places where the underlying rules of the process still apply, and collecting new training data that more accurately reflect the changed conditions. As the pandemic lingers, we should be able to accumulate enough real-world examples of its impact to underpin AI systems that can do things like detect COVID-19 in lung scans, or automatically filter out harmful misinformation about the pandemic.

However, we must not push aside the principles that govern AI use in our rush to address the crisis. Contact tracing apps, for example, have raised concerns about the collection of sensitive personal health and location data, and while it may be tempting to make exceptions during a crisis, it may prove challenging to close these doors once they are opened. There has also been growing concern that the pandemic will accelerate the replacement of human workers with AI.

While we might expect greater automation in situations where safety and distancing measures for a workplace are costly or infeasible, high levels of pandemic-related unemployment may actually reduce the cost of human labour and therefore bolster hiring in other areas.

 

AI is still a relatively new technology, and its adoption requires investment and risk that companies in crisis mode may not be ready for. And many of the jobs most affected by the pandemic require face-to-face human interaction - the skill AI is least able to learn. It is possible that the pandemic will therefore lead to a greater appreciation of the value of human interaction, and new ideas about how to preserve it in the future.

 


AI and the Future of Jobs


Is artificial intelligence coming for your job? Preparing for a future without human work will require more than addressing basic financial needs. Actively involving workers in the development, adoption, and implementation of the technology can result in systems that are more practical, innovative, and effective.

 

While some reports suggest nearly half of all jobs may be automated, other analyses note two important nuances. The first is that AI creates as well as replaces jobs. AI systems still need humans to develop them, handle nonroutine cases, provide a human touch, and monitor for failures.

 

New technologies can also sometimes create entirely novel jobs - like social media influencer. A second nuance is that - at least for the foreseeable future - AI systems will only be able to take over specific tasks rather than entire jobs.

One report estimated that while 60% of all jobs have at least some tasks that could be automated, only 5% are under threat of full automation. And as AI excels at routine tasks, it can free up humans for more interesting challenges. This augmentation-rather-than-automation approach offers the best opportunities for not only preserving employment but also ensuring effective and valuable AI.

 

Even with an augmentation approach, however, AI systems will cause potentially significant job disruptions - calling for a rethinking of education, employment, and policy systems. While technology skills would seem a worthwhile investment focus, there is also a need for general skills that can improve employment adaptability - such as critical thinking - as well as the skills that AI struggles to replicate, such as creativity, human touch, and emotional intelligence.

 

It is not certain whether human work will eventually disappear, but two features of the current situation are particularly troubling. The first is prevalent wealth inequality both within and between countries; if AI does lead to widespread job displacement, extreme inequality could lead to disastrous outcomes. The second is the central role that work plays as a source of personal worth and meaning in many societies. One popular proposed solution to a future without work is a universal basic income, where people receive regular payment regardless of employment. While such a program might address financial need, truly preparing for a future without work requires a deeper reinvention of human identity.

Operationalizing Responsible AI

There is an increasing awareness that considerable barriers still exist when it comes to actually operationalizing AI principles. The challenge now is how best to put a central set of tenets - respect for privacy, transparency, explainability, human control, and mitigating bias - into broad practice and to enforce their use, particularly since ethical principles can have very different meanings depending on location and cultural context. Meanwhile, there has been a growing recognition of the potentially negative impact of artificial intelligence on society.

 

Survey results published by the Center for the Governance of AI in 2019 suggested that more Americans think high-level machine intelligence will be harmful than think it will be beneficial to humanity, for example. In response to sentiments like this, over a relatively short period of time more than 160 different sets of principles for ethical AI have been developed around the world. While these differ in terms of emphasis and cultural context, they all point to a growing consensus around a central set of tenets: respect for privacy, transparency, explainability, human control, and mitigating bias.

 

Many of the principles are very general, however, requiring considerable work to translate them into day-to-day practices, and some of the most important related questions regarding accountability, auditing, and liability remain unanswered. Some of the principles may come into conflict with one another during implementation. And while there may be general agreement on the principles in name, their specific interpretation and meaning will vary (sometimes considerably) according to context and culture. As a result, there is a critical need for further international cooperation on developing mutually beneficial and constructive ways to operationalize the ethical principles of AI. While many companies and government leaders say that they want to ensure responsible development and behavior, without easy-to-use solutions and clear guidelines the effort and cost required to operationalize effectively will discourage action.

 

Lawmakers should use both informal and formal means to hold these organizations accountable for their use of AI, while promoting responsible practices and uses of the technology. As we seek to facilitate adoption of these guidelines, we also need to increase the cost of inaction - every organization should be expected not only to endorse the responsible use of AI, but also to provide clear evidence that its own practices match its rhetoric.

Can AI Overcome its Limitations?



Given the related publicity and hype, one might be forgiven for believing that artificial intelligence is on the verge of surpassing human intelligence - or even taking over the world. Estimates for when truly agile and adaptable AI might emerge range from 10 years to never.

 

The reality, however, is that current AI falls far short of true intelligence. While AI may be faster than a human and better at optimizing, it can only learn one specific task. The majority of AI in use today is some form of machine learning, which works by looking for patterns in real-world examples, or “training data.” When a machine learning system is deployed, it uses the patterns identified in the training data to predict or make decisions.
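In code, that pattern-matching loop is very short: fit a model to labelled examples, then reuse whatever patterns it found to label new inputs. The sketch below uses scikit-learn and invented numbers purely for illustration.

```python
# Minimal sketch of the train-then-predict pattern described above.
from sklearn.linear_model import LogisticRegression

# Training data: each row is a real-world example, each column a feature.
X_train = [[0.2, 1.0], [0.4, 0.9], [3.1, 0.1], [2.8, 0.3]]
y_train = [0, 0, 1, 1]  # labels observed alongside those examples

model = LogisticRegression()
model.fit(X_train, y_train)  # "learn" the patterns in the examples

# Deployment: the model can only reapply the patterns it has seen, so its
# predictions are trustworthy only while new inputs resemble the training data.
print(model.predict([[0.3, 0.95]]))  # resembles the class-0 examples -> [0]
```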

 

AI systems can fail in situations characterized by sudden change - such as the COVID-19 pandemic. If the situation changes and no longer matches the training data, then the algorithm must be retrained. A machine learning system is not capable of developing general concepts to carry from one situation to another - a key element of “artificial general intelligence,” which does not currently exist (and it is unclear whether it ever will). Meanwhile, the limitations of machine learning can have real-world consequences.
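A toy sketch of that failure mode, again with invented numbers: a model fit to the old pattern scores zero once the underlying relationship inverts, and only retraining on fresh examples restores it.

```python
# Hypothetical illustration of distribution shift and retraining.
from sklearn.tree import DecisionTreeClassifier

# Pre-change world: small feature values mean class 0, large mean class 1.
X_old, y_old = [[1], [2], [3], [7], [8], [9]], [0, 0, 0, 1, 1, 1]
model = DecisionTreeClassifier().fit(X_old, y_old)

# Post-change world: the relationship has inverted.
X_new, y_new = [[1], [2], [3], [7], [8], [9]], [1, 1, 1, 0, 0, 0]
print(model.score(X_new, y_new))  # 0.0 - every prediction is now wrong

# The only remedy for a pattern-matching system is new training data.
model.fit(X_new, y_new)
print(model.score(X_new, y_new))  # 1.0 after retraining
```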

 

AI systems need large amounts of data to learn a task, compared with a child who can learn to recognize a dog after seeing a few examples and drawing on past experiences and abstract concepts. Sometimes, AI systems can struggle with novel situations that seem relatively simple - such as what to do if attempting a penalty kick in a soccer match and the goalkeeper simply falls down.

 

Current AI systems are also only effective when deployed under conditions that match their training data. They can optimize under these conditions, but are not capable of envisioning new ways to undertake the same task, or of predicting the outcomes of fundamental changes. The public perception of AI as cutting-edge and disruptive therefore stems not from the technology’s inherent qualities, but from the human ingenuity displayed in using this tool.

 

Researchers are looking for ways to achieve artificial general intelligence, though their path is not clear - and machine learning may end up being a dead end. There is broad disagreement about when we might achieve artificial general intelligence, with estimates ranging from 10 years to never.

The Geopolitical Impacts of AI

According to a report published by PwC, North America and China are likely to be home to 70% of the global economic impact of AI, with other developed countries in Europe and Asia capturing much of the rest (North America is expected to see as much as a 14% GDP boost from AI by the year 2030, while China is expected to see a GDP boost of as much as 26% by that point).

 

Artificial intelligence has the potential to deepen divides both within and between countries, as a result of the distribution of related benefits and know-how. The geographical concentration of the technology could aggravate international rivalries.

This situation risks spawning both a competitive race between countries for AI dominance, and the widening of a knowledge gap that will leave much of the rest of the world even further behind. AI competition entails not only battles over talent and computing infrastructure, but also over access to - and control of - data. The ability of data to flow across borders means that early movers in AI can gain global influence that may make it difficult for initiatives elsewhere to catch up.

 

A second geopolitical concern related to AI is the role the technology can play - both unintentionally and intentionally - in exacerbating political divisions and polarizing societies. There is a growing awareness of the ways social media can contribute to polarization, and AI-driven recommendation algorithms play a significant role. In addition to potentially keeping users trapped in bubbles of content that match their own worldview - limiting access to other perspectives and possibly hardening misperceptions - these systems can have the often-unanticipated effect of actively pushing users towards more extreme content.
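The escalation dynamic can be shown with a deliberately simplified simulation - the engagement function below is a hypothetical assumption, not a model of any real platform. If users engage most with content slightly more extreme than what they last consumed, a recommender that greedily maximizes engagement will ratchet them toward the extreme end one small step at a time.

```python
# Toy simulation of engagement-driven escalation (all numbers hypothetical).

items = [extremity / 10 for extremity in range(11)]  # 0.0 (mild) to 1.0 (extreme)

def engagement(user_pos, item):
    # Assumption: users engage most with items slightly more extreme
    # than their current position.
    return 1.0 - abs(item - (user_pos + 0.1))

user_pos = 0.0
for step in range(5):
    best = max(items, key=lambda item: engagement(user_pos, item))
    user_pos = best  # consuming the item shifts the user's position
    print(f"step {step}: recommended extremity {best:.1f}")
# Recommendations climb 0.1, 0.2, 0.3, 0.4, 0.5 - no single step looks
# alarming, but the greedy loop steadily drifts toward the extreme.
```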

 

For example, YouTube has drawn a significant amount of criticism for the ways in which the video streaming service’s recommendation algorithm can nudge users in the direction of extremist political views and conspiracy theories based on their browsing behavior.

 

AI is also frequently used intentionally to manipulate and polarize viewpoints, most notably through the creation of “deepfake” video and audio content designed to deceive the public and denigrate public figures (experts fear that an ability to fake large-scale historical events could one day irreparably damage the public’s trust in what it sees).

AI, Diversity, and Inclusion


 

Artificial intelligence tools are often promoted as an opportunity to improve diversity and inclusion. However, the news is full of stories about AI systems going horribly awry in ways that have the opposite effect. One way to avoid such problems is to create more diverse development teams.

 

Numerous examples exist of AI systems that are problematic because they reflect the world views and assumptions of their creators. While diverse teams are not a guaranteed fix, they reduce the odds that diversity and inclusion impacts will be overlooked. Diverse AI talent also broadens the innovation landscape more generally in ways that can push the technology forward on all fronts.

 

Some aspects of AI - such as its large scale, automated processes, and data-based decisions - could in principle expand access to resources and foster fairer treatment. Yet these same features also risk creating only the illusion of objectivity, while they encode inequality and injustice on a vast scale - or are used to further oppress disadvantaged groups.

While AI tools do have the potential to improve diversity and inclusion, that power comes not from the technology itself but from its creators. Current AI is not capable of abstract reasoning, nor can it predict the impacts of major change - necessitating human creators who understand why a current system may be problematic, and how AI might improve it.

Similarly, the problematic impacts of AI on diversity and inclusion stem not only from issues related to data and algorithm design, but also from their creators misreading and oversimplifying social systems - and not anticipating unintended consequences.

For example, a scandal erupted in the United Kingdom in 2020 over an algorithm (though not a full AI system) used to grade crucial university entrance exams, which undercut the scores of less-affluent students - illustrating how algorithm creators may fail to anticipate the ways their tools can reinforce existing inequalities.

 

Consideration of the diversity and inclusion impacts of AI systems should be incorporated into the design and evaluation of all AI tools, as well as their regulation and oversight. In addition, subject matter experts are necessary to understand the context in which an AI system will be deployed. Perhaps the most critical need is for AI development teams themselves to become more diverse - through changes in access to education and resources, hiring practices, and organizational cultures.

ARTIFICIAL INTELLIGENCE

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. A number of them were not sure what it was or how it would affect their particular companies.

 

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision-making. However, AI is rife with contradictions. It is a powerful tool that is also surprisingly limited in terms of its current capabilities. And while it has the potential to improve human existence, it simultaneously threatens to deepen social divides and put millions of people out of work. While its inner workings are highly technical, the non-technical among us can and should understand the basic principles of how it works - and the concerns that it raises.


As the influence and impact of AI spread, it will be critical to involve people and experts from the most diverse backgrounds possible in guiding this technology in ways that enhance human capabilities and lead to positive outcomes.

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

 

This is as expressed in the Artificial Intelligence and Life in 2030 report released in 2016 by the Standing Committee at Stanford University.
