A regular reader sends a WSJ link.
Apparently social media software shows cultural bias. The initial training data set for one image-scanning system was predominantly white. (The article does not say what the percentages were, or whether the predominance simply tracked American demographics or went beyond that.) Additionally, because there are more white faces than black ones on the internet, software that continually learns from what it encounters amplifies this bias.
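To make that feedback-loop claim concrete, here is a toy sketch of my own, not anything from the article or the actual system: assume a model whose per-group recognition accuracy simply tracks that group's share of its training data, and assume the system only adds newly crawled images that it manages to recognize. Under those assumptions, an initial imbalance compounds on its own.

```python
# Toy simulation of how an imbalanced training set can be amplified by a
# feedback loop. Everything here is hypothetical: per-group recognition
# accuracy is modeled as that group's share of the current training pool,
# and only recognized new images are added back into the pool.

WEB_MAJORITY_SHARE = 0.8   # assumed share of majority-group faces on the web
BATCH = 1000               # new images crawled per round


def simulate(initial_majority: float = 800.0,
             initial_minority: float = 200.0,
             rounds: int = 10) -> None:
    majority, minority = initial_majority, initial_minority
    for r in range(1, rounds + 1):
        total = majority + minority
        # Crude accuracy proxy: recognition rate = share of the training pool.
        maj_acc = majority / total
        min_acc = minority / total
        # Only images the model recognizes get added to its training data.
        majority += BATCH * WEB_MAJORITY_SHARE * maj_acc
        minority += BATCH * (1 - WEB_MAJORITY_SHARE) * min_acc
        share = majority / (majority + minority)
        print(f"round {r}: majority share of training data = {share:.3f}")


if __name__ == "__main__":
    simulate()
```

Starting from an 80/20 split, the majority share climbs toward 1.0 within a few rounds, which is the amplification the article gestures at.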
Fixing the imbalance is certainly the first priority for the software developers. But another use immediately occurred to me. Observing how the software goes wrong might be useful in learning how we learn our own biases, including the black box of what "initial data set" is hard-wired into us, and whether it can be compensated for. In the examples from the article there was some intuitive connection between how we act and how the software acts, which made the failures quickly understandable. There may be more connections that are not immediately noticeable.
There may be a hidden problem in that. If modifying the software gives us insight into how we ourselves might be modified, couldn't that insight be used to increase our biases rather than decrease them, in a manner that reflects the desires of the more powerful?