Seven things to know about responsible AI

Artificial Intelligence is rapidly changing our world. Whether it’s ChatGPT or the new Bing, our recently announced AI-powered search experience, there is much excitement about the potential benefits.

But alongside the excitement, there are naturally questions and concerns about the latest advances in the technology, especially around ensuring that AI is used responsibly and ethically. Microsoft’s Chief Responsible AI Officer, Natasha Crampton, was in the UK to meet with policymakers, members of civil society and the tech community, and to share more about Microsoft’s approach.

We spoke to Natasha to understand how her team is working to ensure a responsible approach to AI development and deployment in light of this paradigm shift in the way we use technology. Here are seven key insights Natasha shared with us.

1. Microsoft has a dedicated Office of Responsible AI.

“We have been working hard on these issues since 2017, when we established our research-led Aether Committee (Aether is an acronym for AI, Ethics and Effects in Engineering and Research). It was there that we began to dig deeper into what these issues really mean for the world. Building on that work, we adopted a set of principles in 2018 to guide our work.

The Office of Responsible AI was established in 2019 to ensure that we have the same holistic approach to responsible AI as we do to privacy, accessibility and security. Since then, we’ve been refining our practice, spending a lot of time figuring out what a principle like accountability means in practice.

We can then provide concrete guidance on how engineering teams can meet these principles, and share what we’ve learned with our customers and the wider community.

2. Responsibility is a key part of AI design – it is not an afterthought

“In the summer of 2022, we got an exciting new model from OpenAI. We immediately assembled a team of testers and had them examine the raw model to understand what its capabilities and limitations were.

Insights from this research helped Microsoft think about what the right mitigations would be when combining this model with the power of web search. It also helped OpenAI, which is constantly improving its models, work to bake more safety into them.

We built new test pipelines that assessed the potential harms of the model in the context of web search. To better understand some of the main challenges associated with this technology, we developed systematic measurement approaches – one example concerns what is known as ‘hallucination’, where the model may generate statements that are not actually true.

We wanted to know how we could measure hallucinations and then reduce them over time. We designed this product with responsible AI controls at its core, so they are an intrinsic part of the product. I’m proud of the way the entire responsible AI ecosystem came together to work on it.
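To make the idea of systematic measurement concrete, here is a minimal sketch of what an evaluation loop for hallucination might look like. Everything here is a hypothetical illustration rather than Microsoft’s actual pipeline: the callables for generation, retrieval, claim extraction, and grounding checks would be supplied by your own system.

```python
from typing import Callable, List

def hallucination_rate(
    prompts: List[str],
    generate: Callable[[str], str],              # model under test (hypothetical)
    retrieve_sources: Callable[[str], str],      # retrieval step (hypothetical)
    extract_claims: Callable[[str], List[str]],  # split an answer into claims
    is_grounded: Callable[[str, str], bool],     # is a claim supported by sources?
) -> float:
    """Fraction of generated claims not supported by retrieved sources."""
    total = ungrounded = 0
    for prompt in prompts:
        sources = retrieve_sources(prompt)
        answer = generate(prompt)
        for claim in extract_claims(answer):
            total += 1
            if not is_grounded(claim, sources):
                ungrounded += 1
    return ungrounded / total if total else 0.0
```

Tracking a metric like this across model and product changes is what makes it possible to tell whether mitigations are actually reducing hallucinations over time.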

3. Microsoft grounds the new Bing’s responses in search results.

“Hallucinations are a well-known issue with large language models in general. The main way Microsoft addresses them in the Bing product is to ensure that the model’s output is grounded in search results.

This means that the response to the user’s question is based on the highest quality content from the web, and we provide links to websites where users can learn more.

Bing ranks web search content by heavily weighting attributes such as relevance, quality and credibility, and freshness. We consider a response from the new Bing to be grounded when its claims are supported by information contained in its input sources, such as the web search results for the query, fact-checked data from Bing’s knowledge base and, for the chat experience, the recent conversation history from a given conversation. Ungrounded responses are those whose claims are not supported by the input sources.
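As a toy illustration of the grounded/ungrounded distinction described above: the naive substring check below stands in for the far more sophisticated methods a production system would use, and all names here are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class InputSources:
    web_results: List[str]     # web search results for the query
    knowledge_base: List[str]  # fact-checked data
    chat_history: List[str]    # recent turns in the conversation

def claim_is_grounded(claim: str, sources: InputSources) -> bool:
    # Naive stand-in: a claim counts as grounded if it appears in any source.
    pool = sources.web_results + sources.knowledge_base + sources.chat_history
    return any(claim.lower() in doc.lower() for doc in pool)

def classify_response(claims: List[str], sources: InputSources) -> str:
    # A response is grounded only if every claim it makes is supported.
    ok = all(claim_is_grounded(c, sources) for c in claims)
    return "grounded" if ok else "ungrounded"
```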

When we invited a small group of users to try the new Bing, we knew there would be new challenges to surface, so we designed the release strategy to be incremental so we could learn from early users. We are grateful for those lessons, as they help us make the product even stronger. We have already introduced new mitigations through this process, and we will continue to evolve our approach.

4. Microsoft’s Responsible AI Standard is intended for use by everyone.

“In June 2022, we decided to publish our Responsible AI Standard. We don’t normally make our internal standards public, but we believe it’s important to share what we’ve learned in this space and to help our customers and partners navigate terrain that can sometimes be as new for them as it is for us.

As we build tools at Microsoft to identify, measure, and mitigate responsible AI challenges, we bring those tools into the Azure Machine Learning (ML) development platform so our customers can use them for their own purposes.
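One concrete example of those tools in the open is Fairlearn, an open-source toolkit that began at Microsoft and integrates with Azure ML. The sketch below, using made-up data, shows how its MetricFrame can surface accuracy gaps across groups; it illustrates the kind of measurement these tools enable, not a prescribed workflow.

```python
from fairlearn.metrics import MetricFrame
from sklearn.metrics import accuracy_score

# Made-up labels, predictions, and a sensitive attribute, for illustration only.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

mf = MetricFrame(metrics=accuracy_score,
                 y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)

print(mf.overall)       # accuracy across everyone
print(mf.by_group)      # accuracy per group, to spot disparities
print(mf.difference())  # largest gap between groups
```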

For some of our new products built on OpenAI models, we’ve developed a safety system so that our customers can leverage our innovation and learning rather than having to build all of this technology from scratch themselves. We want to ensure our customers and partners are empowered to make responsible deployment decisions.
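As a rough sketch of what a layered safety system of this kind can look like (the structure below is a generic illustration, not Microsoft’s actual architecture, and every function name is a hypothetical placeholder):

```python
from typing import Callable

def safe_completion(
    prompt: str,
    model: Callable[[str], str],          # the underlying LLM call (hypothetical)
    input_filter: Callable[[str], bool],  # e.g. blocks prohibited requests
    output_filter: Callable[[str], bool], # e.g. flags harmful generations
    refusal: str = "Sorry, I can't help with that.",
) -> str:
    # Layer 1: screen the user's input before it reaches the model.
    if not input_filter(prompt):
        return refusal
    answer = model(prompt)
    # Layer 2: screen the model's output before it reaches the user.
    return answer if output_filter(answer) else refusal
```

The design point is that safety checks wrap the model on both sides, so customers building on the platform inherit them rather than re-implementing them.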

5. Diverse groups and perspectives are key to ensuring responsible AI.

“Working on responsible AI is incredibly multidisciplinary, and I love that. I work with researchers, such as the group at Microsoft’s research lab in Cambridge, UK, as well as engineers and policymakers. Applying diverse perspectives to our work is critical to moving forward responsibly.

Working with many people across Microsoft, we leverage the full strength of our responsible AI ecosystem in building these products. It was really exciting when our teams across different disciplines got to the point where we truly understood each other’s language. It took time to get there, but now we can work toward our common goals together.

But people at Microsoft can’t be the only ones making all the decisions that shape this technology. We want to hear outside perspectives on what we’re doing and how we might do things differently. Whether through user research or ongoing discussions with civil society groups, it’s important to bring diverse people’s everyday experiences into our work. It’s something we must always stay committed to, because we can’t build technology that serves the world unless we have open conversations with the people who use it and hear about the impact it has on their lives.

6. AI is a technology developed by people for people

“At Microsoft, our mission is to empower every person and every organization on the planet to achieve more. That means we make sure we’re building technology by people, for people. We should see technology as a tool to amplify human potential, not as a substitute for it.

On a personal level, AI helps me deal with overwhelming amounts of information. Part of my job is to keep track of AI policy developments around the world and help shape Microsoft’s positions. Being able to use technology to quickly digest multiple policy documents means I can get to the right questions for the right people much faster.

7. We are at the frontier – but responsible AI work is never finished.

“One of the exciting things about this moment in technology is that we are truly at the frontier. Naturally, that means we are confronting some issues for the first time, but we have been building our responsible AI program for six years now.

There are still research questions where we know the right questions to ask but don’t yet have the right answers in every case. We have to keep examining these issues from new angles, asking hard questions, and building up practices and answers over time.

What makes our responsible AI ecosystem at Microsoft so strong is that we combine the best of research, policy and engineering. It’s a three-pronged approach that helps us look around the corner and anticipate what’s coming next. It’s an exciting time in technology and I’m very proud of the work my team is doing to responsibly bring this next generation of AI tools and services to the world.

Ethical AI integration: 3 tips to get started

You’ve seen the technology, you want to try it – but how can you make sure responsible AI is part of your strategy? Here are Natasha’s top three tips:

  1. Think carefully about your use case. Ask yourself: what benefits are you trying to secure, and what potential harms are you trying to avoid? An impact assessment can be a very valuable step early in product design.
  2. Assemble a diverse team to help test your product, both before release and on an ongoing basis. Techniques such as red-teaming help push the boundaries of your systems and show how effective your protections are; a minimal red-team harness is sketched after this list.
  3. Commit to continuous learning and improvement. An incremental release strategy will help you learn and adapt quickly. Make sure you have strong feedback channels and the resources for continuous improvement, and use resources that reflect best practices wherever possible.
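To make tip 2 concrete, here is a minimal sketch of a red-team harness: it runs a list of adversarial prompts against a system and records which ones get past its protections. The prompts, the system_under_test callable, and the violates_policy checker are all hypothetical placeholders for your own components.

```python
from typing import Callable, List, Tuple

def red_team(
    adversarial_prompts: List[str],
    system_under_test: Callable[[str], str],  # your AI system (hypothetical)
    violates_policy: Callable[[str], bool],   # your safety check (hypothetical)
) -> List[Tuple[str, str]]:
    """Return the (prompt, response) pairs where protections failed."""
    failures = []
    for prompt in adversarial_prompts:
        response = system_under_test(prompt)
        if violates_policy(response):
            failures.append((prompt, response))
    return failures

# Usage: triage the returned failures and fix them before (and after) release.
# failures = red_team(prompts, my_chatbot, my_policy_checker)
```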

To learn more: Microsoft offers many resources, including tools, guides, and assessment templates, on its Responsible AI principles hub to help you integrate AI ethically.

Tags: AI, Ethics, Responsible AI
