Exploring the promise and risks of a future with more capable AI
Imagine a future where we interact regularly with a range of advanced artificial intelligence (AI) assistants — and where millions of assistants interact with each other on our behalf. These experiences and interactions may soon become part of our everyday reality.
General-purpose foundation models are paving the way for increasingly advanced AI assistants. Capable of planning and performing a wide range of actions in line with a person’s aims, they could add immense value to people’s lives and to society, serving as creative partners, research analysts, educational tutors, life planners and more.
They could also bring about a new phase of human interaction with AI. This is why it’s so important to think proactively about what this world could look like, and to help steer responsible decision-making and beneficial outcomes ahead of time.
Our new paper is the first systematic treatment of the ethical and societal questions that advanced AI assistants raise for users, developers and the societies they’re integrated into, and provides significant new insights into the potential impact of this technology.
We cover topics such as value alignment, safety and misuse, and the technology's impact on the economy, the environment, the information sphere, and access and opportunity.
This is the result of one of our largest ethics foresight projects to date. Bringing together a wide range of experts, we examined and mapped the new technical and moral landscape of a future populated by AI assistants, and characterized the opportunities and risks society might face. Here we outline some of our key takeaways.
A profound impact on users and society
Advanced AI assistants could have a profound impact on users and society, and be integrated into most aspects of people’s lives. For example, people may ask them to book holidays, manage social time or perform other life tasks. If deployed at scale, AI assistants could impact the way people approach work, education, creative projects, hobbies and social interaction.
Over time, AI assistants could also influence the goals people pursue and their path of personal development through the information and advice assistants give and the actions they take. Ultimately, this raises important questions about how people interact with this technology and how it can best support their goals and aspirations.
Alignment with human values is essential
AI assistants will likely have a significant level of autonomy for planning and performing sequences of tasks across a range of domains. Because of this, AI assistants present novel challenges around safety, alignment and misuse.
With more autonomy comes greater risk of accidents caused by unclear or misinterpreted instructions, and greater risk of assistants taking actions that are misaligned with the user’s values and interests.
More autonomous AI assistants may also enable high-impact forms of misuse, like spreading misinformation or engaging in cyber attacks. To address these potential risks, we argue that limits must be set on this technology, and that the values of advanced AI assistants must be better aligned with human values and compatible with wider societal ideals and standards.
Communicating in natural language
Because advanced AI assistants can communicate fluidly in natural language, their written output and voices may become hard to distinguish from those of humans.
This development opens up a complex set of questions around trust, privacy, anthropomorphism and appropriate human relationships with AI: How can we make sure users can reliably identify AI assistants and stay in control of their interactions with them? What can be done to ensure users aren’t unduly influenced or misled over time?
Safeguards, such as those around privacy, need to be put in place to address these risks. Importantly, people's relationships with AI assistants must preserve the user's autonomy and support their ability to flourish, rather than fostering emotional or material dependence.
Cooperating and coordinating to meet human preferences
If this technology becomes widely available and deployed at scale, advanced AI assistants will need to interact with each other and with users and non-users alike. To help avoid collective action problems, these assistants must be able to cooperate successfully.
For example, thousands of assistants might try to book the same service for their users at the same time — potentially crashing the system. In an ideal scenario, these AI assistants would instead coordinate on behalf of human users and the service providers involved to discover common ground that better meets different people’s preferences and needs.
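The booking scenario above can be sketched as a toy simulation (this is an illustration only, not a method from the paper; the function names and the uniform-random model of uncoordinated choice are our own assumptions). When assistants each pick a slot independently, many collide on the same slot and most requests fail; a simple mediator that assigns distinct slots avoids the contention entirely:

```python
import random

def uncoordinated(n_assistants, n_slots, rng):
    """Each assistant independently picks a random slot; when several
    pick the same slot, only one booking per slot succeeds.
    Returns the number of successful bookings."""
    chosen = [rng.randrange(n_slots) for _ in range(n_assistants)]
    return len(set(chosen))

def coordinated(n_assistants, n_slots):
    """A mediator assigns assistants to distinct slots, so every
    assistant (up to slot capacity) gets a booking with no contention."""
    return min(n_assistants, n_slots)

rng = random.Random(42)
# With 1,000 assistants and 1,000 slots, uncoordinated choice leaves
# roughly a third of assistants without a booking on average, while
# coordination serves everyone.
print(uncoordinated(1000, 1000, rng))
print(coordinated(1000, 1000))
```

The gap widens as demand approaches capacity, which is the intuition behind treating assistant-to-assistant coordination as a first-class design problem.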
Given how useful this technology may become, it’s also important that no one is excluded. AI assistants should be broadly accessible and designed with the needs of different users and non-users in mind.
More evaluations and foresight are needed
AI assistants could display novel capabilities and use tools in new ways that are challenging to foresee, making it hard to anticipate the risks associated with their deployment. To help manage such risks, we need to engage in foresight practices that are based on comprehensive tests and evaluations.
Our previous research on evaluating social and ethical risks from generative AI identified some of the gaps in traditional model evaluation methods, and we encourage much more research in this space.
For instance, comprehensive evaluations that address the effects of both human-computer interactions and the wider effects on society could help researchers understand how AI assistants interact with users, non-users and society as part of a broader network. In turn, these insights could inform better mitigations and responsible decision-making.
Building the future we want
We may be facing a new era of technological and societal transformation driven by the development of advanced AI assistants. The choices we make today, as researchers, developers, policymakers and members of the public, will guide how this technology develops and is deployed across society.
We hope that our paper will function as a springboard for further coordination and cooperation to collectively shape the kind of beneficial AI assistants we’d all like to see in the world.