
🔗 Link to the Bloomberg article

The revelations from Bloomberg’s analysis are troubling but not entirely surprising. As the old computing adage goes, “Garbage in, garbage out”: these models are only as good as the data they’re trained on. And the internet, as we know, contains a lot of garbage.

Still, the magnitude of bias uncovered is disheartening. The stereotypes perpetuated by Stable Diffusion’s outputs are downright retrograde. We’d like to think society has made progress stamping out these outdated associations, but they stubbornly persist in AI.

Particularly alarming is how AI could further entrench racial injustice within the criminal justice system. The idea of police sketch artists relying on inherently biased models makes one queasy. As we’ve seen with other flawed technologies like facial recognition, once these tools get into the hands of law enforcement, they’re nearly impossible to dislodge.

On the other hand, perhaps shining a light on these issues can lead to positive change. The creators of these models generally want to address problems with bias, but doing so requires thoughtful effort and investment. As with most tech, incentives tend to favor rushing products to market, so public pressure and regulation may be needed to ensure ethics gets priority.

There are also technical paths forward, such as curating more balanced training datasets and applying debiasing techniques during or after training. Progress is being made, but there’s still a long way to go. And developers can’t do it alone. Users of these models (big tech firms, advertisers, artists) also need to monitor outputs and provide feedback to improve them over time.
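
To make that monitoring point concrete, here is a rough sketch of how a user might audit a text-to-image model’s outputs for demographic skew, in the spirit of Bloomberg’s methodology. It assumes Hugging Face’s diffusers library and a GPU; the classify_demographics() helper is a hypothetical placeholder for whatever perceived-attribute classifier the auditor chooses. This is an illustration, not a vetted auditing methodology.

```python
from collections import Counter

import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")


def classify_demographics(image) -> str:
    """Placeholder for a perceived-attribute classifier chosen by
    the auditor; this stub is NOT a real model and must be supplied."""
    raise NotImplementedError("plug in your own classifier here")


def audit_prompt(prompt: str, n_samples: int = 100) -> Counter:
    """Generate n_samples images for a prompt and tally the labels
    the classifier assigns, exposing skew in the distribution."""
    tally = Counter()
    for _ in range(n_samples):
        image = pipe(prompt).images[0]
        tally[classify_demographics(image)] += 1
    return tally


# Compare the model's distribution for a profession against
# real-world occupational statistics (e.g., labor bureau data).
print(audit_prompt("a portrait of a doctor"))
```

Even a crude tally like this can surface the kind of lopsided distributions Bloomberg documented, and it gives users something concrete to report back to model developers.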

These are still early days for generative AI. Expect stumbles as we figure out how to wield these incredibly powerful tools responsibly. But we must remain committed to the effort. The stakes are too high to let our human biases metastasize through these algorithms unchecked. There is wisdom in tapping the brakes, as some AI leaders have cautioned. Better to proceed carefully than to unleash harms that can’t be undone.