Warning existing gender biases may be programmed into artificial intelligence
Generative artificial intelligence has experts worried existing gender biases will be programmed into the systems, putting women and children at risk unless governments implement adequate policy.
Generative artificial intelligence – such as ChatGPT – is a type of system that is capable of generating text, images or other media in response to prompts.
Artificial Intelligence and Metaverse specialist Dr Catriona Wallace told the Forbes Women in Business Summit this week that her biggest concern, as generative artificial intelligence takes hold of the world, is the lack of responsibility shown by tech giants.
“The world changed for us in November last year when generative AI was launched and now we see more and more organisations using this extraordinary technology,” she said.
“The challenge is that, in the AI sector, 9 in 10 jobs are still held by men and 1 in 10 are held by women.”
Given this gender disparity, Wallace is concerned that digital worlds will look exactly like the current physical world, with the same imbalances of gender, race and sex.
“There is still absolutely the most likely chance we will be hardcoding society’s existing biases towards women and minorities into the machines that are running our world,” she said.
“There is little, if any, regulation to do with AI because the tech is so far ahead of the government and policymakers,” she said.
Wallace emphasised the need for government policy to prevent this from happening in artificial reality.
“Women, children and minorities are still at significant risk from AI,” she said.
Wallace warned tech companies aren’t showing consideration for ethics in artificial reality as more and more programs, from ChatGPT to Apple’s upcoming augmented reality offerings, come onto the market.
“Tech giants are running the show and the world,” she said.
“None of the tech giants, in my opinion, are demonstrating that they have ethics and responsibility in mind because it is counter to their business model of profit.”
Machines Behaving Badly author and artificial intelligence expert Toby Walsh said programming societal biases into the technology is a deep fundamental problem.
“Much of artificial intelligence is based on machine learning, which is based on data,” he told 9news.com.au.
“Data is historical; it reflects the past and the society it captured, and there are lots of biases in that data.”
He warned if tech companies aren’t careful, they will perpetuate those biases.
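The mechanism Walsh describes can be illustrated with a toy sketch. The Python below uses an invented, deliberately skewed "historical" hiring dataset (the numbers and the predicted_hire_rate helper are hypothetical, not drawn from any real system) to show how a model that simply learns from past outcomes reproduces the disparity baked into them.

```python
# A minimal, hypothetical sketch of the mechanism Walsh describes: a model
# "trained" on historical decisions reproduces whatever bias those decisions
# contained. All data below is invented purely for illustration.
from collections import defaultdict

# Invented historical hiring records as (gender, hired) pairs. The skew is
# deliberate, standing in for the biased real-world data Walsh warns about.
history = [("m", True)] * 80 + [("m", False)] * 20 + \
          [("f", True)] * 30 + [("f", False)] * 70

# Naive "learning": estimate hire probability per gender from past outcomes.
counts = defaultdict(lambda: [0, 0])  # gender -> [hires, total]
for gender, hired in history:
    counts[gender][0] += hired
    counts[gender][1] += 1

def predicted_hire_rate(gender: str) -> float:
    """Hypothetical model: predicts purely from the historical hire rate."""
    hires, total = counts[gender]
    return hires / total

# The model faithfully reproduces the historical disparity.
print(predicted_hire_rate("m"))  # 0.8
print(predicted_hire_rate("f"))  # 0.3
```

Because the only signal the model sees is the historical record, the imbalance carries straight through into its predictions; correcting it takes the deliberate time, effort and money both experts call for.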
Walsh added it isn’t just gender biases that can be programmed into these technologies; access for people with disabilities is also at risk.
“Unless you put in the time and effort and money, these tools won’t be accessible to that part of the population,” he said.
So what is the solution to this ethical problem?
Walsh said the answer isn’t just government regulation, some of which already covers gender discrimination; it is also about tech companies having diverse teams working on the programs.
He said it is also about having ethics teams oversee potential biases, and about programmers being aware of their own biases and watching for them surfacing in the technology.
“The problem is systems can continue the biases that are present among humans,” he said.
He said tech companies should invest the time and money to make their systems accessible to all minority groups, as it is in their long-term financial interest.
“In the long term, you can see that for companies that roll out artificial intelligence in a responsible way, consumers will see it as a competitive advantage,” he said.
Walsh added that tech giants also need to be transparent about progress in artificial intelligence, noting there is no longer much that is “open about OpenAI”, which makes it harder to push back against hard-coding biases into the systems.