
JudgeRaccoon3306

Problem and Solution Essay on the Use of Robots in the Police Force

 

In the lawless America we live in today, crime rates have skyrocketed across the country, and the police who are duty-bound to protect us from crime are sometimes criminals themselves. Who, then, do we charge with the crucial task of protecting the citizens? Some suggest that we make the laws stricter to deter criminals, some suggest we increase policing across the country, and some say, “Let us replace the human police with robots.” There are multiple concerns about the use of robots in the police force: who would be responsible for controlling them, who would manufacture them and the AI they use to make decisions, and the effect they would have on minorities who are already unjustly overpoliced. But there are also solutions: diversifying the tech workplace, reforming the human side of policing before introducing robots, and researching the effects robots would have on society at large before implementing them.

First, robots that make racist decisions are controlled either by people who are racist or by AI trained on racist data. At its core, a robot has no autonomous, self-thinking brain; traditionally, someone had to be at the helm controlling the machine. Recently, that changed with the introduction of artificial intelligence. AI is capable of making decisions on its own based on complex mathematical algorithms and the statistical data it is given. But when the data provided to the AI reflects only one segment of the country, it can lead to horrible outcomes for everyone else. It is well known that Black Americans are underrepresented in the American workforce and even more underrepresented in the tech workforce: “Black workers, who comprise 11% of total employment across all occupations, are 9% of STEM workers. This is unchanged from 2016. Black workers account for just 5% of engineers and architects and 7% of workers in computer occupations” (“STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity”). This grave underrepresentation of Black Americans in the tech space leads to the creation of robots and AI that favor only the majority of the people who created them.
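To make the mechanism concrete, here is a minimal sketch with entirely made-up numbers (the groups, outcomes, and function name are hypothetical, invented for illustration): a system that simply learns how often each group was flagged in historical records will reproduce whatever skew those records contain, fair or not.

```python
from collections import Counter

# Hypothetical historical records of the form (group, outcome).
# In this made-up data, group B was flagged four times as often as group A.
history = ([("A", "flag")] * 10 + [("A", "no_flag")] * 90 +
           [("B", "flag")] * 40 + [("B", "no_flag")] * 60)

def learn_flag_rates(records):
    """A 'model' that just learns the historical flag rate per group."""
    flags, totals = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        flags[group] += (outcome == "flag")
    return {g: flags[g] / totals[g] for g in totals}

rates = learn_flag_rates(history)
# rates["B"] is four times rates["A"]: the model has "learned" nothing
# except the skew already present in its training data.
```

The point of the sketch is that no malice is required anywhere in the code; the bias arrives entirely through the data.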

Second, the people who code artificial intelligence do not test it rigorously for accuracy on large, diverse populations. When you are creating a system that will be responsible for protecting society and enforcing the law, you have to be certain it is unbiased, accurate, and acts only on the most reliable and correct data. We already know that, no matter how much we try, human bias will always exist, and unfortunately this bias makes its way into police organizations and other important societal institutions, where it leads to racist acts and horrible crimes against marginalized people. Through poor testing, the same bias can make its way into AI: “researcher Kate Crawford mentions the controversy surrounding Google’s photo application, which, in 2015, accidentally classified images of African Americans as gorillas” (Kutateli). When the data fed into an AI is flawed and unchecked, it can lead to horrible errors like that one. Douglas offers another example: “Another problem with the algorithms is that many were trained on white populations outside the US, partly because criminal records are hard to get hold of across different US jurisdictions. Static 99, a tool designed to predict recidivism among sex offenders, was trained in Canada, where only around 3% of the population is Black compared with 12% in the US. Several other tools used in the US were developed in Europe, where 2% of the population is Black. Because of the differences in socioeconomic conditions between countries and populations, the tools are likely to be less accurate in places where they were not trained” (Douglas). When the data used to train an AI is not carefully checked for accuracy and population distribution, that AI should not be used in robots responsible for policing any population.
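One simple form the rigorous testing described above can take is disaggregated evaluation: measuring a model's accuracy separately for each subgroup instead of reporting a single overall number. The sketch below uses invented labels and results (the groups, counts, and function name are hypothetical) to show how an overall score can hide a subgroup failure.

```python
def accuracy_by_group(records):
    """records: (group, true_label, predicted_label) tuples.
    Returns each group's accuracy separately."""
    hits, totals = {}, {}
    for group, truth, pred in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + (truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical evaluation results: group A dominates the test set,
# so the overall accuracy looks fine even though group B is failed badly.
records = ([("A", 1, 1)] * 90 + [("A", 1, 0)] * 5 +
           [("B", 1, 1)] * 3 + [("B", 1, 0)] * 2)
overall = sum(t == p for _, t, p in records) / len(records)
per_group = accuracy_by_group(records)
# overall is 93%, yet accuracy on group B is only 60% -- exactly the kind
# of gap a single headline number conceals.
```

This is why the essay's second concern matters: an AI vendor reporting only aggregate accuracy can look trustworthy while performing poorly on exactly the populations that are underrepresented in its test data.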

Third, if robots were used on a large scale, the effect they would have on minorities would be the most damaging. Because the AI that operates police robots is not trained on diverse populations, the robots would make decisions based on harmful stereotypes baked in during training. “A robot operating with a popular internet-based artificial intelligence system consistently gravitates to men over women, white people over people of color, and jumps to conclusions about peoples’ jobs after a glance at their face. The work, led by Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely used model operate with significant gender and racial biases” (“Flawed AI Makes Robots Racist, Sexist | Research”). If robots with these flaws were deployed in the police force, it would lead to the unjust arrest, sentencing, and, in the worst case, killing of Black people and other underrepresented minorities who are missing from the data that AI and robots use to learn.

On the other hand, software developers, the judicial system, and the police all benefit from the problems created by the use of robots in the police force. The software developers would benefit because they would profit from selling harmful AI. The judicial system would benefit from the influx of wrongfully arrested people to prosecute, and it could also use AI to do the prosecuting, cutting the cost of hiring judges. The police would benefit because they could leave all the heavy lifting to the dangerous robots.

While there are problems with the use of robots in the police force, there are also solutions. The first solution, which would have a net benefit once fully implemented, is diversifying the technology and computer science fields and creating more job opportunities there for Black people and other minorities. Doing this would lead to a world where the major inventions built to be used by everyone in society are built by that society itself, not by a majority that ignores the minority. In the article “Can We Make Our Robots Less Biased Than We Are?”, David Berreby describes an open letter sent to leaders of the tech industry: “The open letter is linked to a page of specific action items. The items range from not placing all the work of “diversity” on the shoulders of minority researchers to ensuring that at least 13 percent of funds spent by organizations and universities go to Black-owned businesses to tying metrics of racial equity to evaluations and promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and A.I., including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code, and Black in A.I.” (Berreby). The letter lays out measures that colleges around America could adopt to address racial bias in the tech field. The time frame for this solution is long, since balancing the uneven representation of Black workers and other minorities in tech would take years of sustained effort, but the long-term benefit makes it worthwhile.

A second solution to the issue of the use of robots in the police force would be to reform the human side of the police before introducing robots. Police officers are themselves human and are susceptible to human biases, so before introducing robots, let us first address the issue of bias in human officers. In “Can We Make Our Robots Less Biased Than We Are?”, Berreby also points to the need for this reform, describing how Black students feel unsafe and unwelcome on campuses due to constant harassment from campus police: “The statement calls for reforms, including ending the harassment of Black students by campus police officers and addressing the fact that Black people get constant reminders that others don’t think they belong” (Berreby). If this reform of the police and their attitude toward Black people on college campuses happened, Black students would feel safer at the colleges they attend, which would lead more of them into STEM fields and allow for a more diverse workforce. The time frame for this solution depends only on the police officers themselves.

Finally, a third solution to the issue of the use of robots in the police force would be to do more research on the effects they would have on society at large before implementing them. Any big change introduced to society should come only after rigorous testing, and the same should hold for AI and robots. Before introducing such drastic measures, intensive research and trials should be done to evaluate the safety, benefits, and drawbacks of deploying them widely. In the article “How a Machine Learns Prejudice,” Jesse Emspak makes the case for this kind of research: “Still others say the technology could be improved by accounting for errors in the patterns computers learn, in an attempt to keep out human prejudices. An AI system will make mistakes when learning—in fact, it must, and that’s why it’s called “learning,” says Jürgen Schmidhuber, scientific director of the Swiss AI Lab Dalle Molle Institute for Artificial Intelligence. Computers, he notes, will only learn as well as the data they are given allows. “You cannot eliminate all these sources of bias, just like you can’t eliminate these sources for humans,” he says. But it is possible, he adds, to acknowledge that, and then to make sure one uses good data and designs the task well; asking the right questions is crucial.” (Emspak). In this excerpt, the author stresses the importance of continually testing an AI and improving the data it is given. AI acts only on the data it is fed and on the patterns, including stereotypes, it finds in its data sets. A constant effort to remove bias from this research would lead to a less racist outcome. The time frame for this solution depends only on the researchers and how quickly they improve the data used in their work.
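Schmidhuber's advice to "use good data" has a concrete, well-known counterpart in practice: rebalancing a skewed data set so that no group dominates what the model learns. The sketch below is one minimal way to do that, with a hypothetical group list and an invented function name, shown purely for illustration.

```python
from collections import Counter

def balancing_weights(groups):
    """Assign each record a weight so that every group contributes
    equally to the total, regardless of how many records it has."""
    counts = Counter(groups)
    total, n_groups = len(groups), len(counts)
    return [total / (n_groups * counts[g]) for g in groups]

# Hypothetical data set where group A has 8 records and group B only 2.
groups = ["A"] * 8 + ["B"] * 2
weights = balancing_weights(groups)
# Each A record gets weight 0.625 and each B record 2.5, so both groups
# sum to the same total weight of 5.0 during training.
```

Techniques like this do not remove bias on their own, which is exactly the essay's point: they only help if researchers take the time to audit the data and ask the right questions before deployment.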

While many people would agree with these solutions to the use of robots in the police force, a few would disagree with the three solutions mentioned above. Investors and shareholders are the people who fund the research and programs that create these robots, and they usually want to cut costs and get fast, profitable results. Logically, they are the only people who would disagree with the solutions proposed above, because of how much time those solutions would add to the production of AI robots.

In conclusion, the use of robots in the police force is inevitable. As humanity becomes more and more technologically advanced, better solutions will be applied to existing problems, and robots sound like a good solution to the problem of dangerous crime. A good solution does not always mean a good outcome 100% of the time, but if we apply the solutions proposed above, the use of robots in the police force can benefit the Black community and humanity in general.

 

Works Cited

Berreby, David. “Can We Make Our Robots Less Biased Than We Are?” The New York Times, 3 December 2020, https://www.nytimes.com/2020/11/22/science/artificial-intelligence-robots-racism-police.html. Accessed 5 May 2023.

“Cop-Controlled Robots Authorized to Kill in San Francisco.” YouTube, 1 December 2022, https://www.youtube.com/watch?v=BzHqSaY0ZJ8. Accessed 5 May 2023.

Douglas, Will. “Predictive policing algorithms are racist. They need to be dismantled.” MIT Technology Review, 17 July 2020, https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/. Accessed 5 May 2023.

Emspak, Jesse. “How a Machine Learns Prejudice.” Scientific American, 29 December 2016, https://www.scientificamerican.com/article/how-a-machine-learns-prejudice/. Accessed 5 May 2023.

“Flawed AI Makes Robots Racist, Sexist | Research.” Georgia Tech Research, 23 June 2022, https://research.gatech.edu/flawed-ai-makes-robots-racist-sexist. Accessed 5 May 2023.

“Joy Buolamwini: How I’m fighting bias in Algorithms.” TED, 9 March 2017, https://www.ted.com/talks/joy_buolamwini_how_i_m_fighting_bias_in_algorithms?language=en. Accessed 5 May 2023.

Kutateli, Kristina. “Are Robots Susceptible to Racial Bias?” Pacific Standard, 15 July 2016, https://psmag.com/news/are-robots-susceptible-to-racial-bias. Accessed 5 May 2023.

Murray, John. “Racist Data? Human Bias is Infecting AI Development.” Towards Data Science, https://towardsdatascience.com/racist-data-human-bias-is-infecting-ai-development-8110c1ec50c. Accessed 5 May 2023.

“STEM Jobs See Uneven Progress in Increasing Gender, Racial and Ethnic Diversity.” Pew Research Center, 1 April 2021, https://www.pewresearch.org/science/2021/04/01/stem-jobs-see-uneven-progress-in-increasing-gender-racial-and-ethnic-diversity/. Accessed 5 May 2023.

Verma, Pranshu. “Robots trained on AI exhibited racist and sexist behavior.” The Washington Post, 16 July 2022, https://www.washingtonpost.com/technology/2022/07/16/racist-robots-ai/. Accessed 5 May 2023.

Peer Review Questions

This assignment asks writers to identify a problem and advocate for a way to address, solve, or resolve that problem. Has the writer clearly identified a problem and solution? What are they?


Can you tell who the writer identified who is impacted by or needing to know about this problem and solution? Who is the audience?


Setting forth a plan for addressing/resolving the problem includes explaining costs, benefits, and steps, comparing the problem to others, and explaining the feasibility of the proposed solution. Is the writer spelling out their solution in enough detail? If yes, what are they doing well? If no, where do they need to expand in more detail?


The proposal should be formatted in a manner appropriate to a professional setting, including strategies like subheadings, graphs or charts, relevant images, or textboxes. Has the writer presented their proposal in a format that would be appropriate in a professional environment? If so, how? If not, what do you recommend?