3 thoughts on “Weapons of Math Destruction”

  1. To paraphrase the author: “a TED Talk can be as biased as she claims algorithms are.”
    Let’s see: a green-haired woman, with a dog-whistle to the left when she bases her title on the phrase ‘weapons of mass destruction’, whose first example is against Fox News and sex discrimination. My! Quelle surprise! And then she rails against teacher metrics! Could she be more stereotypically biased for the left?

    Then she goes on to righteously claim that policing is based on bias, segregation, and data manipulation. I have studied crime data, and she has absolutely no leg to stand on here. Black crime statistics are consistently an order of magnitude above white crime statistics, and yet 45%+ of the arresting officers are black.

    “Algorithms automate the status quo” is an absurd conclusion, even based on the anecdotal evidence she thinks she’s supplying. It would require only evil right-wingers to be creating the algorithms. What evidence does she have of that? In fact, I would argue that algorithms represent a desire to move to a future-state condition, not a current-state condition. Good or bad, they are, by definition, not status-quo-seeking.

    This person insists that the ‘black box algorithms’ must be manipulated because they produce results she abhors.

    There’s only one thing that’s clear to me here. Yes, you can use algorithms to produce biased results, in exactly the same way you could with a word processor. “How can we expect the algorithms to remain unscathed? We have to check them for ‘fairness’.” No. She’s exactly wrong. You cannot impose your interpretation of fairness on the data. If black-sounding names, as she claims, have less success in job applications, it’s because of the data-training that hiring managers have received. If “Tawana” or “Mohammed” suggests more trouble than “Mary” or “Chang-lee”, you need to consider the underlying reasons for this. Irrational hatred or prejudice on the hiring manager’s part is only one possibility… avoiding lawsuits from litigious, righteous, (perhaps even green-haired) civil rights activists may be another.

    After all, your internal bias engine is essentially Bayesian. It will learn and modify itself over time with more data. It is somewhat immune to top-down directives.
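
    For what it’s worth, here is a minimal sketch of the kind of updating I mean, assuming a conjugate Beta-Bernoulli model (the function name and the counts below are invented purely for illustration):

    ```python
    # Hypothetical illustration: a Beta(a, b) prior over a success rate,
    # updated with Bernoulli observations. More data moves the belief.

    def update(a: float, b: float, successes: int, failures: int):
        """Conjugate update: Beta(a, b) prior + Bernoulli data -> Beta posterior."""
        return a + successes, b + failures

    a, b = 1.0, 1.0                              # weak, uncommitted prior (mean 0.5)
    a, b = update(a, b, successes=7, failures=3)  # observe 7 hits, 3 misses
    print(f"posterior mean = {a / (a + b):.3f}")  # -> 0.667, pulled toward the data
    ```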

    1. I think going ad hominem in the first paragraph is a clue that the political content has distracted you from the underlying generic structure.

      “It would require only evil right-wingers to be creating the algorithms” is contrary to the talk. The whole point is that the algorithms inadvertently reflect existing biases – no need to invoke evil intent.

      You’ve basically taken as a premise that there is only rational discrimination. If that were true, then Bayesian updating would work. But it’s not true in a world with positive feedback and cognitive biases: when the data you collect depends on the beliefs you already hold, updating can lock the initial bias in rather than correct it.
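
      To make the feedback point concrete, here is a toy simulation (every parameter invented) of two neighborhoods with identical true incident rates, where incidents only enter the data wherever the current counts send a single observer. The counts follow a Pólya-urn dynamic, so the estimate can drift far from 50/50 and stay there, because the data collection itself depends on the belief it is supposed to test:

      ```python
      import random

      # Toy model (hypothetical parameters): both neighborhoods have the SAME
      # true incident rate, but an incident is only observed where the patrol
      # is, and the patrol is sent in proportion to previously observed counts.
      random.seed(42)
      TRUE_RATE = 0.3
      counts = [1, 1]  # Beta(1, 1)-style pseudo-count prior

      for _ in range(10_000):
          # Allocate the observer according to the current belief.
          patrol = 0 if random.random() < counts[0] / sum(counts) else 1
          # An incident can only enter the data where someone is looking.
          if random.random() < TRUE_RATE:
              counts[patrol] += 1

      print(f"observed share, neighborhood 0: {counts[0] / sum(counts):.2f}")
      # Equal true rates, yet the share settles at a random limit, often far
      # from 0.50. The "Bayesian" counts converge, but not to the truth.
      ```

      That is open-loop thinking in miniature: nothing outside the loop ever checks the counts against ground truth.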

      Take a look at my posts on teacher VAM (value-added modeling). The New York model output is demonstrably, objectively garbage. It’s not a right or left issue; it’s about good or bad implementation of a potentially useful measurement strategy, and open-loop vs. closed-loop thinking.

      1. I didn’t go ad hominem; I merely described the speaker with the cultural clues that were evident before me: green hair, anti-Fox commentary, cherry-picking a teacher anecdote to challenge the very concept of measuring teacher performance. C’mon. If a TED Talker wearing a Confederate flag shirt and a skinhead haircut immediately used hackneyed and biased arguments to make their point, you, anyone, would put up strong cognitive filters.

        Her whole point is that “Algorithms might be biased”. She inadvertently made her point by covering the topic in a biased way.
