
Colin Thornton, Chief Commercial Officer at Turrito, unpacks the essential considerations that must govern the use of artificial intelligence and algorithms in the burgeoning metaverse

The metaverse conversation has spiked. It is on every digital wall and circles most conversations about innovation, the future, and immersive digital reality. It is also a layered and nuanced concept, one that asks the world to hit pause before it steps inside the digital walls, before people become half unicorn-mermaids and every asset and sale is digitised in a wholly virtual realm. The world isn't ready, and neither is the technology. The hardware needed to create the virtual worlds people are imagining doesn't exist yet, though it will soon. The bigger problem, and one that can't be solved with hardware, is that these systems contain inherent bias: bias built into the algorithms that will fundamentally shape how people experience and live in the metaverse.

In 2019, the Apple credit card launch met with immediate problems. It was sexist. The algorithm offered smaller lines of credit to women than to men, even though gender was not one of its inputs. While that fact was offered in defence of the algorithm, the reality is that bias can be built from many foundations, anything from location to age to job description. It took several people applying for credit to underscore just how significantly the card's algorithm was biased against women. Even Steve Wozniak weighed in, saying he was given ten times more credit than his wife, despite the couple sharing all their bank accounts.
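This proxy effect, where a model never sees the protected attribute but discriminates through features correlated with it, can be illustrated with a minimal sketch. The data and scoring rule below are entirely invented for illustration (they have nothing to do with Apple's actual model): an occupation code correlates with gender, so a credit score built only on income and occupation still produces gendered outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Invented synthetic applicants. Gender is NOT an input to the model,
# but the occupation code is correlated with it (a proxy variable).
gender = rng.integers(0, 2, n)                 # 0 = male, 1 = female
occupation = gender + rng.normal(0, 0.3, n)    # proxy correlated with gender
income = rng.normal(50_000, 10_000, n)         # independent of gender

# A toy credit-limit rule that never sees gender directly,
# yet penalises the occupation proxy.
credit_limit = income * 0.5 - occupation * 5_000

mean_male = credit_limit[gender == 0].mean()
mean_female = credit_limit[gender == 1].mean()
print(f"avg limit (male):   {mean_male:,.0f}")
print(f"avg limit (female): {mean_female:,.0f}")
```

Dropping the protected attribute from the inputs does not remove the bias; it only hides the channel through which it flows.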

The MIT Media Lab's Gender Shades project found that AI was packed full of bias, right down to its ability to detect skin colour and gender: the systems it analysed struggled to identify women overall, and women of colour most of all. Google Autocomplete drew significant criticism as far back as 2016, and again in 2018, for racist sentence completions that have, as of 2022, been mostly managed but still show how bias crept into a system designed to be neutral. Also in 2018, Amazon had to ditch its AI recruitment tool because it had taught itself, from the biased makeup of the global workforce, that men were better candidates. In short, when the company went looking for top people to fill its roles, the AI spat out men over women.

Fast forward to 2022 and the problem hasn't gone away. In fact, it should be a blaring red horn on the digital horizon as technology leaders and heavy-hitting tech enterprises gallop toward the metaverse saloon. In a recent analysis of AI undertaken by the World Economic Forum, researchers underscored the importance of recognising bias through meticulous testing, real-world environment modelling, and clear strategies for fairness and non-discrimination. As the report points out, it is far too easy for 'the existing bias in our society to be transferred to algorithms'.
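The testing the researchers call for can start very simply. One common check is demographic parity: comparing how often each group receives a favourable outcome. The function and toy decisions below are a hypothetical sketch, not a method from the WEF report.

```python
import numpy as np

def demographic_parity_gap(approved, group):
    """Absolute difference in approval rate between group 0 and group 1."""
    approved = np.asarray(approved, dtype=bool)
    group = np.asarray(group)
    rate0 = approved[group == 0].mean()
    rate1 = approved[group == 1].mean()
    return abs(rate0 - rate1)

# Toy decisions: group 1 is approved far less often than group 0.
approved = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
group    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
gap = demographic_parity_gap(approved, group)
print(f"demographic parity gap: {gap:.2f}")
```

A gap near zero is no guarantee of fairness, but a large one is exactly the kind of red flag meticulous testing is meant to surface before a system ships.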

Now, apparently on the brink of diving into the metaverse, Meta has embarked on building one of the world's fastest supercomputers. Called the AI Research SuperCluster (RSC), it is being built to power the company's next generation of AI, because only AI will be intelligent and fast enough to handle the vast amounts of data being produced. RSC is designed to learn from trillions of examples across languages, texts, cultures, images, and video so it can handle real-time translations, collaborations, and experiences. This sounds amazing, but it is being built by the same company that has been found to exhibit gender bias, most recently by promoting certain job ads only to men. Research has also found racially discriminatory content on the social media platform.

Meta is not alone: Twitter and LinkedIn have also been called out for algorithmic biases, all very much in favour of men, particularly white men. Which is nice for them, but not for the remaining, very large, percentage of the population. These examples shine a spotlight on the fact that even if humans are ready for AI, AI isn't necessarily ready for humans. It's not even remotely ready for the immersive, predictive, and intuitive digital life put forward by proponents of the metaverse.

AI is not truly independent. It is reliant on those who create it. To change this trajectory, there must be a fundamental change in how AI is designed and built, and in the power it is given. If it does become the controlling force in the metaverse, it can very easily entrench, or widen, the cracks that are so apparent in society today.
