Kathy Baxter, principal architect of the ethical AI practice at Salesforce, says AI developers must move quickly to address algorithmic bias in the systems they develop and deploy. In an interview with ZDNET, Baxter stressed the need for diverse representation in datasets and user research to ensure fair and unbiased AI systems. She also highlighted the importance of making AI systems transparent, understandable, and accountable while protecting individual privacy. Baxter stressed the need for collaboration across sectors, such as the model used by the National Institute of Standards and Technology (NIST), so that we can develop robust and secure AI systems that benefit everyone.
One of the fundamental questions in AI ethics is how to develop and deploy AI systems without reinforcing existing social biases or creating new ones. To that end, Baxter stressed the importance of asking who benefits from, and who pays for, AI technology. It is critical to examine the datasets being used and ensure that they represent everyone's voices. Rigor throughout the development process, including user research to identify potential harms, is also essential.
Also: ChatGPT's intelligence is zero, but it's a revolution in usefulness, says AI expert
“This is one of the fundamental questions that we have to discuss,” Baxter said. “Women of color, in particular, have been asking this question and doing research in this area for years now. And I’m glad to see so many people talking about this, especially with the use of generative AI. But the things we need to ask are, fundamentally: Who benefits and who pays for this technology? Whose voices are included?”
Social bias can be instilled in AI systems through the datasets used to train them. Unrepresentative datasets that contain biases, such as image datasets that depict only one ethnicity or lack cultural variation, can produce biased AI systems. Applying AI systems unevenly across society can likewise perpetuate existing stereotypes.
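To make the dataset point concrete, here is a minimal sketch of a representation check; the records and the `ethnicity` field are hypothetical, and a real audit would cover many more attributes and their intersections:

```python
from collections import Counter

def representation_report(records, attribute):
    """Summarize how often each value of a demographic attribute
    appears in a dataset, as a first check for skewed coverage."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical image-metadata records: a split this skewed is the kind
# of imbalance that can carry over into a biased model.
dataset = [
    {"image_id": 1, "ethnicity": "A"},
    {"image_id": 2, "ethnicity": "A"},
    {"image_id": 3, "ethnicity": "A"},
    {"image_id": 4, "ethnicity": "B"},
]
print(representation_report(dataset, "ethnicity"))  # {'A': 0.75, 'B': 0.25}
```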
To make AI systems transparent and understandable to the average person, prioritizing explainability during the development process is key. Techniques such as chain-of-thought prompting can help AI systems show their work and make their decision-making more understandable. User research is also vital to ensuring that explanations are clear and that users can identify uncertainties in AI-generated content.
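As a rough sketch of the "show your work" idea, the prompt below asks a model to expose its reasoning and flag uncertainty; `generate` is a placeholder for whatever text-generation API is in use, not a specific library call:

```python
EXPLAIN_TEMPLATE = """Question: {question}

Think through the problem step by step, then give your final answer.
Afterward, list anything you are uncertain about."""

def explainable_query(generate, question):
    """Wrap a question in a chain-of-thought prompt so the model's
    intermediate reasoning and stated uncertainties are visible to
    the user, not just the final answer.

    `generate` is any callable mapping a prompt string to response text."""
    return generate(EXPLAIN_TEMPLATE.format(question=question))
```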
Also: AI could automate 25% of all jobs. Here's which are most (and least) at risk
Protecting individual privacy and ensuring the responsible use of AI require transparency and consent. Salesforce follows guidelines for responsible generative AI that include respecting the provenance of data and using customer data only with consent. Giving users the ability to opt in, opt out, or control how their data is used is critical to privacy.
“We only use customer data when we have their consent,” Baxter said. “Being transparent about when someone’s data is being used, letting them opt in, letting them come back and say when they no longer want their data included is really important.”
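A minimal sketch of that consent policy, with hypothetical field names: data is usable only while an explicit opt-in is on record, and a missing or withdrawn consent is treated as a no:

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    customer_id: str
    opted_in: bool = False  # explicit opt-in required; the default is "no"

def usable_records(records, consent_index):
    """Filter records down to those whose owners currently consent.
    A customer with no consent record is treated as opted out, and
    revoking consent (opted_in=False) removes them on the next run."""
    return [
        rec for rec in records
        if consent_index.get(rec["customer_id"],
                             ConsentRecord(rec["customer_id"])).opted_in
    ]
```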
As competition to innovate in generative AI intensifies, maintaining human control and autonomy over increasingly autonomous AI systems is more important than ever. Enabling users to make informed decisions about the use of AI-generated content and keeping a human in the loop can help maintain that control.
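One simple form of that control is a review gate that holds generated content until a person explicitly approves it; a minimal sketch, with hypothetical names:

```python
def publish_with_review(draft, approve):
    """Hold AI-generated content until a human reviewer signs off.
    `approve` is any callable that presents the draft to a person
    and returns True only on explicit approval."""
    return draft if approve(draft) else None  # None: held back, not published

# Example: a console prompt as the human in the loop.
if __name__ == "__main__":
    decision = publish_with_review(
        "AI-generated summary of the quarterly report...",
        lambda text: input(f"Approve?\n{text}\n[y/N] ").strip().lower() == "y",
    )
    print("Published." if decision else "Held for human revision.")
```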
Ensuring that AI systems are safe, reliable, and usable is crucial, and industry-wide collaboration is vital to making that happen. Baxter praised the AI Risk Management Framework created by NIST, which involved more than 240 experts from various sectors. This collaborative approach provides a common language and framework for identifying risks and sharing solutions.
Failure to address these ethical AI issues can have serious consequences, as seen in wrongful arrests caused by facial-recognition errors or in the generation of harmful images. Investing in safeguards and focusing on the harms present today, not just on potential future ones, can help mitigate these issues and ensure the responsible development and use of AI systems.
Also: How does ChatGPT work?
While the future of artificial intelligence and the possibility of artificial general intelligence are interesting topics, Baxter stresses the importance of focusing on the present. Ensuring the responsible use of AI and addressing social biases today will better prepare society for advances in AI in the future. By investing in ethical AI practices and collaborating across industries, we can help create a safer, more inclusive future for AI technology.
“I think the timeline is very important,” Baxter said. “We really have to invest in the here and now and create this muscle memory, create these resources, create the regulations that allow us to keep moving forward but do it safely.”